🤖 AI Summary
Generative AI-driven deepfakes pose severe threats to biometric authentication systems, including facial recognition and voiceprint verification, yet a pronounced gap between expert risk assessments and public risk perception leaves these widely trusted systems vulnerable.
Method: Using a mixed-methods design (a survey of 408 professionals plus 37 in-depth interviews), the study combines qualitative thematic coding with cross-group statistical analysis to empirically characterize disparities in risk perception across stakeholder groups.
Contribution/Results: We introduce the first “Deepfake Kill Chain” attack model tailored to biometric authentication, and our data reveal a structural divergence: domain experts (e.g., in finance) exhibit high threat awareness, whereas the general public demonstrates consistently low risk perception. Based on these findings, we propose a three-tiered collaborative defense framework: (1) dynamic biometric signal enhancement (e.g., eye-movement cues), (2) privacy-preserving data governance mechanisms, and (3) tiered, targeted public education. This work delivers the first empirically grounded roadmap for mitigating AI-enabled identity spoofing, coordinating technological, institutional, and cognitive lines of defense.
📝 Abstract
Generative AI (Gen-AI) deepfakes pose a rapidly evolving threat to biometric authentication, yet a significant gap exists between expert understanding of these risks and public perception. This disconnect creates critical vulnerabilities in systems trusted by millions. To bridge this gap, we conducted a comprehensive mixed-methods study, surveying 408 professionals across key sectors and conducting in-depth interviews with 37 participants (25 experts and 12 members of the general public). Our findings reveal a paradox: while the public increasingly relies on biometrics for convenience, experts express grave concerns about the spoofing of static modalities such as face and voice recognition. We found significant demographic and sector-specific divides in awareness and trust, with finance professionals, for example, showing heightened skepticism. To systematically analyze these threats, we introduce a novel Deepfake Kill Chain model, adapted from Hutchins et al.'s intrusion kill chain to map the specific attack vectors that malicious actors use against biometric systems. Based on this model and our empirical findings, we propose a tri-layer mitigation framework that prioritizes dynamic biometric signals (e.g., eye movements), robust privacy-preserving data governance, and targeted educational initiatives. This work provides the first empirically grounded roadmap for defending against AI-generated identity threats by aligning technical safeguards with human-centered insights.
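To give a flavor of what a kill-chain adaptation looks like in practice, the minimal sketch below enumerates the seven stages of Hutchins et al.'s original intrusion kill chain and pairs each with a deepfake-against-biometrics example. The stage names follow Hutchins et al.; the biometric attack mappings are illustrative assumptions for exposition, not the paper's actual Deepfake Kill Chain model.

```python
from enum import Enum


class KillChainStage(Enum):
    """The seven stages of Hutchins et al.'s intrusion kill chain."""
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7


# Hypothetical mapping of each stage to a deepfake attack on a
# face/voice verification system. These examples are assumptions
# for illustration, not the paper's model.
STAGE_EXAMPLES = {
    KillChainStage.RECONNAISSANCE: "scrape a target's photos and voice clips from social media",
    KillChainStage.WEAPONIZATION: "train a face-swap or voice-clone model on the scraped data",
    KillChainStage.DELIVERY: "present the synthetic face/voice to the verification endpoint",
    KillChainStage.EXPLOITATION: "defeat the liveness check with a replayed or injected deepfake",
    KillChainStage.INSTALLATION: "enroll attacker-controlled credentials on the compromised account",
    KillChainStage.COMMAND_AND_CONTROL: "retain access via session tokens tied to the spoofed identity",
    KillChainStage.ACTIONS_ON_OBJECTIVES: "drain funds or exfiltrate data from the hijacked account",
}

if __name__ == "__main__":
    # Print the chain in order, one stage per line.
    for stage in KillChainStage:
        label = stage.name.title().replace("_", " ")
        print(f"{stage.value}. {label}: {STAGE_EXAMPLES[stage]}")
```

Framing attacks this way is what motivates the tri-layer defenses above: dynamic signals such as eye movements raise the cost of the Exploitation stage, while data governance constrains the Reconnaissance stage.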