Realistic simulations of human voice, video, and text created by "Generative AI" systems have proliferated over the last year. These simulations pose an increased risk that Identity Proofing and Verification (IDPV) systems will be unable to distinguish real signals from fake ones. Organizations that rely on IDPV services to prevent fraud or impersonation are experiencing a higher number and frequency of fraudulent attempts. This group will research how IDPV systems could be subverted or fooled by "deepfakes", "Generative AI", and other AI-related mechanisms. The anticipated output of the discussion group is a report describing the nature of the threats, vulnerabilities, and potential countermeasures. The report is intended to inform purchasers of IDPV services about AI-related techniques that may decrease the effectiveness of those services, and to enable readers to discuss the topic and potential risk mitigation actions within their organizations and with IDPV service providers.
These Antitrust Guidelines exist to protect Kantara Initiative, Inc. and its programs on antitrust issues. All programs, working groups, and discussion groups must follow these Antitrust Guidelines.