2024-07-16 DEIA Agenda and Meeting Notes

DRAFT Agenda

  1. Admin/Reminders

    1. Charter:

      1. Please make any final comments via email to Jordan/Peter or via Comments on the Wiki Page.

1. Peter: drafts exist in the Wiki and in Google Docs (some comments are in the Google Doc). Happy to take a first stab at incorporating these comments into the Wiki draft.

1. Context for new members: looking to move into a WG, as the “discussion” portion is mostly complete.

2. ACTION: Peter to take a first pass at incorporating comments.

      2. Please indicate your willingness to provide a signature on the Charter (via email).

      3. Submission to the Leadership Council will happen in the next few weeks.

2. Review the DEIA Survey Questions and Results; provide feedback via email or directly on the Wiki page.

1. Geraldine: will look to contribute this week.

3. Criteria for Assessing Efficacy + Terminology and Taxonomy: Jordan will begin populating these pages. Please review and submit comments via email or directly on the Wiki.

4. Complete GPA sign-up and set up a Confluence account for add/edit access

    5. Continue to add Resources

6. Sign up for the mailing list (if you haven’t completed a GPA)

7. Open Call: presentations for DEIA-related programming

2. DEIA Survey: Question Review (continued discussion). Please review the draft 2024 DEIA survey questions prior to the call and forward any edits directly to Peter Davis (equalspeterd@gmail.com)

  3. Review Session on https://kantara.atlassian.net/wiki/spaces/DEIA/pages/233177104

  4. AOB

  5. Adjourn

 

Participants

Voting: Jordan Burris, Peter Davis

Non-voting: Eric Hitch, Geraldine Bradshaw

Staff: Amanda Gay, Kay Chopard

Invited Observers/Guests:

 

Goals

Goal | Status | Date Established | Deadline


Discussion Items

Item: Criteria for Assessing DEIA Efficacy

Notes:

Context: Measuring solutions/outcomes is difficult, as the industry does not act together. Potentially seeking to make a Kantara recommendation.

Goal: Align each line item to a specific category and agree on the “what/how” (description/measurement approach). A hypothetical computational sketch of several of these rates follows the list.

1. Auto-Decision Rate: how often a yes/no decision was given without needing a manual submission. Helpful in identifying gaps in the approach.

    1. Peter: Resubmit rate is related. There may be a hierarchy that will aid in organization.

2. Eric: Is this representative of the actual decision? Risk scores vs. accept/reject. Places the decision in the hands of customers.

      1. Produces some reports/outputs with models.

3. Jordan: False positives (connect to other items). Perhaps a hierarchy combining resubmits with false positives/negatives?

      1. Is it a good idea to recommend that the industry adopt a standard practice?

4. Peter: FIDO is looking to adopt something for documents (not identity verification). Include the question with the criteria: should we use these for different dimensions of verification?

2. Abandonment rate: are people staying through the entire process?

1. Peter: Usually divided into two piles: lack of access/inability to complete the task, or insufficient information that triggers additional data collection.

    2. Jordan: subcategories (evidence, tech, process burden)

3. Time to complete: Potential for people to get lazy if it takes longer than a minute (drives abandonment). Is this telling for DEIA impact, or only for user experience? How to capture it in a metric?

4. Coverage Accuracy for Demographics: Would a tweak in approach increase coverage? Is everyone else able to do this? Is it necessary?

  5. Manual review rate: Define measurement.

6. % of channel referral: what channel are they kicked to? Inaccuracies in “end-state” counting (100% completion, but where did everyone start?)

    1. Peter: the channel you go through is a second dimension to these metrics.

      1. Bias of evaluator

      2. Focusing on macros across all channels

  7. Disparity between initial/final adjudication: how often does this happen?

1. Peter: there is value in measuring the appeals rate; once you know this, you can measure reversals vs. non-reversals.

      1. 800-63-compulsory process for this


Open Action Items

 

Decisions