B.1 Current threats and attack vectors


Introduction

Remote identity verification systems face significant risks from the evolving threats of deepfake technology. Deepfakes, highly realistic but fabricated audio, video, and still images, can be used to compromise these systems in various ways. Attackers may use deepfake technology to present falsified identities, modify genuine documents, or create synthetic personas, exploiting weaknesses in the verification process. This summary explores the broad types of deepfake attacks; specific methods such as face swaps, expression swaps, synthetic imagery, and synthetic audio; and the various points of attack, including physical presentation attacks, injection attacks, and insider threats. Understanding these threats is crucial for developing robust defenses against the manipulation of identity verification systems.

Note: Deepfake attacks are a threat to a wide range of services, systems, and domains, from social and traditional media to national security and human rights to banking and access to digital systems.

Here, we focus specifically on deepfake threats and attack vectors within the scope of remote identity verification.

Types of Deepfake Attacks

Deepfakes are not “one thing,” but rather a class of digital face and voice manipulation and/or synthesis techniques that can be used to undermine a remote digital identity scheme.

  1. Still Imagery Deepfakes:

    • Face Swaps: Replacing a person's face in an image with another person's face, often seamlessly.

    • StyleGAN2-Type Synthetic Imagery: Generating highly realistic human faces that do not belong to any real person.

    • Diffusion-Based Imagery (e.g., Midjourney, Stable Diffusion): Creating realistic images from textual descriptions or other input images, making it possible to fabricate convincing identity photos.

  2. Audio Deepfakes:

    • Synthetic Speech (e.g., ElevenLabs): Generating realistic synthetic voices with off-the-shelf tools to impersonate someone during a voice verification process.

    • Voice Cloning: Replicating someone's voice to bypass voice authentication systems.

  3. Video Deepfakes (which may combine the still imagery and audio techniques discussed above):

    • Expression Swaps: Altering the facial expressions of a person in a video to match those of another person.

    • Next-Gen Video Avatars (e.g., Synthesia, HeyGen): Creating fully synthetic avatars that can move and speak like real humans, making it hard to distinguish between real and fake identities.
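As a toy illustration of one detection idea related to the synthetic-imagery techniques above: research on GAN-generated images has reported characteristic high-frequency spectral artifacts introduced by upsampling layers. The sketch below is a simplified, hedged example of that intuition, not a production deepfake detector; the function name and the cutoff value are our own illustrative choices.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff.

    Toy heuristic only: some GAN/upsampling pipelines have been observed
    to leave periodic high-frequency artifacts, so suspected synthetic
    images could be compared against natural photos on a statistic like
    this. Not a production deepfake detector.
    """
    # 2-D FFT magnitude spectrum, shifted so low frequencies sit in the center
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalized radial distance of each frequency bin from the spectrum center
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[r > cutoff].sum() / spectrum.sum())

# A smooth gradient concentrates energy at low frequencies,
# while white noise spreads energy across the whole spectrum.
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = np.random.default_rng(0).random((64, 64))
```

A real detector would be trained on large labeled datasets and combined with liveness and injection-detection signals; a single hand-crafted statistic like this is easily defeated.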

Points of Attack

The following diagram summarizes key attack points for deepfakes in remote identity verification (IDV). It builds on earlier work codifying possible presentation attack points in biometric systems, as included in ISO/IEC 30107-1:2016, and on other industry research, such as that done by Stephanie Schuckers at CITeR.

[Diagram: key deepfake attack points in remote identity verification (image-20240731-163323.png)]
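The attack-point taxonomy discussed in this section can be sketched as a small data model. The labels and descriptions below are illustrative shorthand for the categories named in this document (physical presentation, injection, insider), not terminology taken verbatim from ISO/IEC 30107-1.

```python
from dataclasses import dataclass
from enum import Enum

class AttackPoint(Enum):
    # Coarse categories follow the section text; the member names and
    # descriptions are our own illustrative shorthand.
    PRESENTATION = "physical presentation to the sensor (e.g., printed photo, screen replay, mask)"
    INJECTION = "digital injection that bypasses the sensor (e.g., a virtual camera feeding a deepfake)"
    INSIDER = "manipulation by a trusted party inside the verification pipeline"

@dataclass
class ThreatScenario:
    technique: str          # e.g., "face swap", "voice clone"
    attack_point: AttackPoint

# Hypothetical scenarios mapping deepfake techniques to attack points
scenarios = [
    ThreatScenario("StyleGAN2-type synthetic face on a printed ID", AttackPoint.PRESENTATION),
    ThreatScenario("real-time face swap via virtual camera", AttackPoint.INJECTION),
]
```

Modeling the attack surface explicitly like this can help a defender check that each technique-by-attack-point combination has a corresponding control, though the real taxonomy in ISO/IEC 30107-1 is more granular.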

Evolution of Deepfake Attacks

 

While significant risks and attack vectors exist today, the threats are growing rapidly due to the remarkable pace of technological advancement, in particular in and around generative AI.

  • Improved Visual and Audio Quality: Deepfake technology is rapidly advancing, producing increasingly realistic and convincing fake content that is harder to detect.

  • Personalization and Targeting: Deepfakes are becoming more sophisticated in mimicking specific individuals, making them more effective in targeted attacks.

  • Integration with Other Attack Vectors: Deepfakes are being combined with other cyber attack methods, creating more complex and difficult-to-detect threats.

  • Real-Time Manipulation: Advancements in processing power and algorithms are enabling real-time deepfake creation and manipulation, posing new challenges for detection.

  • Scalability and Behavioral Mimicry: Deepfake technology is evolving to replicate not just appearance and voice, but also mannerisms and behaviors, making detection even more challenging.

Conclusion

Deepfake threats present significant challenges to remote identity verification systems. The broad range of attack vectors, from audio and video to still imagery, and the specific methods employed, such as face swaps, expression swaps, synthetic imagery, and synthetic audio, underscore the sophistication and potential impact of these threats. Points of attack span physical presentation, digital injection, and insider threats, highlighting the need for robust security measures and continuous advancement in detection technologies to mitigate the risks posed by deepfakes.

 

Reviewed by: Nicolette Scott