2024-03-28 IAWG Meeting Notes

Meeting Status Metadata

Quorum: quorate

Notes-Status: approved

Approved-Link: https://kantara.atlassian.net/wiki/spaces/IAWG/pages/440303618


Agenda

  1. Administration:

    • Roll call, determination of quorum.

    • Minutes approval 

    • Kantara Updates

    • Assurance Updates

      • ISO SC27/WG5 meetings are in two weeks' time.  There appears to be some movement on ISO 29115 and 29003.  NIST has resumed interest in updating these as part of the 800-63 team looking across the board and figuring out the internationalization side.  AHughes will report back any action.

      • Spring conference season is approaching.  Kantara is 15 years old; planning a few “birthday parties.”

      • The Rev 4 second draft should be released in spring.

  2. IAWG Actions/Reminders/Updates:

  3. Discussion:  

  4. Any Other Business

 

 Attendees

Voting: Andrew Hughes, Michael Magrath, Richard Wilsher, Yehoshua Silberstein, Jimmy Jung

Nonvoting: Sarath Laufer

Staff: Amanda Gay, Lynzie Adams

Guests:

Quorum determination

Meeting is quorate when 50% + 1 of voting participants attend

There are 7 voters as of 2024-03-28.

 

Approval of Prior Minutes

Motion to approve meeting minutes listed below:

Moved by: Jimmy Jung for unanimous consent, no opposition.

Seconded by: N/A

Link to draft minutes and outcome

Discussion


Motion carries.

 Discussion topics

Time

Item

Presenter

Notes


 

Continue discussion on second criteria question #0180 (superior v. strong evidence) with a “tidied” updated version of Richard’s proposed alternative/comparable criteria (sent 2024.03.14)

Richard

  1. Richard’s recap: This is a proposed set of criteria which would allow the use of comparable alternatives, in a manner which actually sets out criteria that have to be met if you are going to use a comp/alt approach.  The criteria are derived from 63 Rev 3 Clause 5.4.  There is a focus in 5.4 which constrains how to determine comparability (functional elements one might choose to replace with something comparable); the criteria would be deployed in both 63A and 63B contexts.  At the time, it was agreed to only think of comp/alts in terms of 63A (just to get things done), and specifically to get an alternative form of evidence.  Thus, to invoke or connect to the comp/alt criteria directly, 63A 192 and 212 were inserted: if you are going to have a comp/alt, other criteria (700-710) come into play, and the provider must meet those criteria.  These were written based on 5.4 with an attempt to instill a degree of rigor and process as to how a comp/alt is justified.  An analysis of the risk assessed by NIST would be needed (somewhat of a guessing game).  Then, under (b): state that this is the alternative approach and identify the risks that may be introduced through applying this different approach, look at the risks that may be reduced through the alternative controls, and select/implement those controls.  Overall: the alternative plan should be as good, the risk analysis should be accepted by top management, and for each comp/alt applied, make sure the service consumers understand that this comparability examination and risk assessment has taken place.  Documentation of the requirements for the application of the comparable controls is needed so any user can deploy them effectively.  Overall goal: set up a consistent approach that helps the CSPs, the assessors, and the ARB with comp/alts.

  2. AHughes: not sure about constraining the risk assessment to evidence only. Not convinced: if we are allowing alternatives of any kind, why are we restricting?  63A700: why are we talking only about IAL?  Shouldn’t it apply to any criteria set in the group?  Why are we mentioning any specific set of criteria; wouldn’t this apply to any criteria set in the Kantara IAF?

  3. Lynzie: when presented to the ARB, these were in reference to a specific service; would they be applicable to everyone else?  Are they comprehensive enough to publish?

  4. Richard: these were presented to IAWG/ARB 18 months ago, much as they are here.

    1. They have been around for some time and used for specific purposes.

    2. They are only stated in 63 Rev 3, so they are only meaningfully applicable to IAL and AAL.

      1. OpSac/CoSac: don’t go there.

  5. Jimmy: perhaps because it may be difficult to achieve. He concurs with having some form of this set of criteria at the bottom of all 3 spreadsheets, and if someone can make it through the hoops, go for it.

  6. Richard: if we take out the focus on evidence selection and validation, it may open the floodgates to people turning up with things that aren’t really IAL2/AAL2, but which we would have to assess because people claim “comparability.”

  7. Yehoshua: are we skipping a step in the need for someone to justify how it is comparable? Richard notes this is covered by A-E.

    1. CSP has determined it is as good as what NIST expects.

  8. Yehoshua notes that this is a conclusory statement.

  9. Richard: could combine (b) and (c) into one statement regarding analyzing the risk that alternative controls may introduce.

  10. Yehoshua: the CSP must do two things:

    1. Make your case for what the NIST control is getting at.

    2. Make your case for how you are trying to do it and how the comparable solution addresses the new risks introduced.

  11. Richard: if there is a theory about what the NIST controls are trying to achieve, and the residual risks are established/measured, then what is the degree of residual risk for the comp/alt?  That is where comparability is determined, because the residual risk of the alternative approach should be as good as (if not better than) the residual risk from the original controls.

  12. Hughes: NIST has controls with residual risk; we are proposing alternative controls with their own risks; we need to ensure the risks before and after the swap are OK.

  13. Yehoshua: Is the primary concern for NIST risk? Can/should we view comparability only from the viewpoint of risk?  Problematic because we can’t quantify confidence with IAL2.

  14. Andrew Hughes: 800-63 is written with the concept of risk analysis introduced; however, it is qualitative risk analysis.  Substitutability of a control is harder to determine from a risk evaluation point of view.

  15. Yehoshua: reviewing 800-63-3 Sections 5.4 and 5.5: https://pages.nist.gov/800-63-3/sp800-63-3.html

    1. 5.4: focusing on it from the agency’s implementing lens, not from the acceptability of the service providing it.  Can’t be altered based on capabilities; implemented/adjusted based on ability to mitigate risk.  Recommends using the 5.5 methodology.

    2. Should we use the 5.5 framework for our (Kantara) criteria?

    3. Yehoshua:  Is this a separate discussion of IAWG needing to articulate (publicly) that this one has comparable controls?

      1. SOC 2 is still SOC 2 even with exceptions (the report would detail this), but what is the medium to share the comp/alts?  Is the report in the SoCA that is downloadable?

    4. Richard: Perhaps require an acknowledgement/statement/description in the public service description in the S3A that a comp/alt is being used, as this goes into the TSL

      1. Could also insist that the CRP makes it clear that there is a comp/alt being used, so that whoever is using the service would be informed.

      2. Final step could be requiring an explicit statement in the TSL

    5. Richard notes that 5.4 is about agencies, and agencies often don’t build these solutions; they buy them from approved CSPs. So the implementation of the comp/alts isn’t in the agency’s hands, it’s in the CSP’s hands.  This may mean the CSP services are available to anyone.


Open Action items

 Decisions