Evidence selection and risk acceptance
63A#0700: put to the IAWG some time back; Richard used them and tagged them with 63A, with modifications to truly represent 63 rev 5.4
They don’t make any distinction about which particular criteria strengths are affected
Derived normative statements from this, aiming to tell people what they have to do (employ comparable alternatives, as long as they are justified)
How best to consider the risks that may be introduced through the use of alternative controls? There needs to be internal acceptance that the comparative risk is a manageable risk.
AHughes: Kantara focuses on evidence selection and validation and NIST is more general (guidance applicable to all requirements) - so are we narrowing this down to just evidence selection?
Richard: NIST has written things that just don’t work in the real world, due to variations that occur
How do we maintain rigor with the constraints currently in place to back it up?
AHughes: AAL/FAL, 5.4 all of this is referenced and covered - will this cause a future assessment issue by narrowing the criteria?
The Kantara criteria say IAL applicability, NIST says IAL/AAL/FAL applicability - is that significant?
Lynzie: In the past, the ARB noted this came up in an assessment, and they did not think of these as a requirement. This gave the assessors and CSPs something to judge against, but not that everything would fit perfectly into the box. If an assessor/CSP had a better way to demonstrate conformance, that would be OK (with adequate justification).
Richard: Rewrite to make it more generic? If it is softened, how do we make sure it is just as good as NIST?
Some flexibility is available by saying “not applicable” and then justifying it
Another option: Develop an additional criterion that says “if you are going to use these, you need a good definition of what you are trying to do” (requiring the explicit/exact criteria the CSP/assessor is trying to replace/modify)
Need a definition/scope for the comparability
Define the problem: evidence selection and validation
Yehoshua: Open to a comparable alternative as long as the assessor/CSP can fully document why this slightly different approach works better
Less worried about making it expansive because they have a starting point and clear direction to fully explain their approach
Richard: More worried about someone saying we don’t do any of “that” (63A) and offering a long list of alternatives
Yehoshua: Each criterion that had a proposed alternative would need to be identified and the alternative justified
Richard: How far can we allow them to go with alternatives? Justify an alternative for every criterion? There could potentially be consistency issues with assessments.
Yehoshua: Include compensating controls, per 5.4 language
Jimmy: Either we are allowing comparability or not, and the reality is that someone may want to do 200 comparable controls and pay for that. The SoCA already includes a “comparable” column. We should make the expectations clear that justification will be needed, but we can’t “sort of” allow comparability.
Eric: It’s expected that more rigor would apply in the assessment with comparability
Richard: We should mandate the use of whatever controls we choose to use for alternative criteria to give consistency (a consistent set of inputs) to assessments being presented to the ARB.
Andrew Hughes: Is there a market advantage to offering a comparable control, if someone has found an alternative way to do something (a cheaper way)?
Should this be a Kantara concern?
Lynzie: Comparable controls are not published on the Trust Status List
Jimmy Jung: Perhaps the SoCA does need to say a comparable control was used here.
Yehoshua and Mike Magrath in agreement.
Mike M: Having a long list of comparable controls could lead to government agencies questioning the assessments
Yehoshua: Is it the level of documentation of how it is comparable?
Andrew Hughes: Is the assessor evaluating what the CSP is delivering or evaluating how well the CSP delivers what the client wants?
Richard: It depends on where the client’s wishes are written
Service definition v. letter
Yehoshua: If we separate the function from the form (if something doesn’t have the properties), then it isn’t a comparable alternative, it’s just an alternative.
Eric: Should go back to the risk mitigation component versus specific evidence (if it is tied to specific evidence, it is artificially constrained)
It should go back into the CRPS and the policy that is shared with every client (and understood by the client)--where the assessor would be looking at this as evidence that supports it.
Andrew: So if a client comes to Kantara and proposes a comparable control, and it’s justified via their assessor, but a future customer of the client asks for more compensating controls, do we do a comparable assessment with the client and their customers? Do we always have to do it?
Richard: We don’t want to get involved with customers. We assess whether a service meets its description in terms of policy and conformity to the Kantara published criteria
Comparability seems to be the way to go, but what is the rubric for acceptance/evaluation?