🤖 AI Summary
Audio deepfake detection (ADD) exhibits severe vulnerability to anti-forensic (AF) attacks, posing critical risks to voice biometric authentication and other security-sensitive applications. This paper systematically evaluates the robustness of mainstream ADD methods against AF attacks across five benchmark datasets. It presents the first unified assessment of both statistical perturbation-based attacks and optimization-driven adversarial attacks (including FGSM, PGD, C&W, and DeepFool), as well as their compositions with common AF techniques (e.g., pitch shifting, filtering, additive noise, and quantization), covering both waveform- and spectrogram-based detection paradigms. Experimental results demonstrate substantial performance degradation of existing detectors under these attacks. The core contributions are: (1) an empirical identification of model generalization bottlenecks under AF perturbations; (2) a standardized evaluation framework designed specifically for AF-resilient ADD assessment; and (3) evidence-based design principles for developing interference-resistant, evolvable, robust ADD systems.
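To make the optimization-driven attack category concrete, the sketch below shows a single-step FGSM perturbation applied to raw audio so that a waveform-based detector misclassifies a deepfake as real. This is a minimal illustration under stated assumptions, not the paper's implementation: the tiny `detector` network, the `fgsm_attack` helper, and the `epsilon` budget are all hypothetical stand-ins for a SoTA ADD model and its attack configuration.

```python
import torch
import torch.nn as nn

# Hypothetical waveform-based ADD model: a binary classifier
# (class 0 = real, class 1 = fake) over raw 16 kHz audio.
detector = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, stride=4),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
)

def fgsm_attack(waveform: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.002) -> torch.Tensor:
    """One gradient-sign step that pushes the input away from its true label."""
    waveform = waveform.clone().detach().requires_grad_(True)
    logits = detector(waveform)
    loss = nn.functional.cross_entropy(logits, label)
    loss.backward()
    # Perturb in the direction that maximizes the loss, then clamp
    # back to the valid audio range so the clip stays playable.
    adv = waveform + epsilon * waveform.grad.sign()
    return adv.clamp(-1.0, 1.0).detach()

# Usage: a 1-second deepfake clip (shape: batch, channel, samples),
# labeled "fake" (class 1); the attack tries to flip the prediction.
fake_clip = torch.randn(1, 1, 16000).clamp(-1.0, 1.0)
adv_clip = fgsm_attack(fake_clip, torch.tensor([1]))
```

PGD follows the same recipe with multiple smaller steps projected back into the epsilon-ball, while C&W and DeepFool replace the gradient-sign step with their own optimization objectives.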
📝 Abstract
Generative AI has shown remarkable success in producing highly realistic deepfakes, posing a serious threat to a range of speech-based applications, including speaker verification, voice biometrics, audio conferencing, and criminal investigations. To counteract this, several state-of-the-art (SoTA) audio deepfake detection (ADD) methods have been proposed that identify generative AI signatures to distinguish real from deepfake audio. However, the effectiveness of these methods is severely undermined by anti-forensic (AF) attacks that conceal generative signatures. These AF attacks span a wide range of techniques, including statistical modifications (e.g., pitch shifting, filtering, noise addition, and quantization) and optimization-based attacks (e.g., FGSM, PGD, C&W, and DeepFool). In this paper, we investigate SoTA ADD methods and provide a comparative analysis that highlights their effectiveness in exposing deepfake signatures, as well as their vulnerabilities under adversarial conditions. We conduct an extensive evaluation of ADD methods from two categories, raw waveform-based and spectrogram-based approaches, on five deepfake benchmark datasets. This comparative analysis enables a deeper understanding of the strengths and limitations of SoTA ADD methods against diverse AF attacks. It not only highlights the vulnerabilities of ADD methods but also informs the design of more robust and generalized detectors for real-world voice biometrics. It will further guide future research in developing adaptive defense strategies that can effectively counter evolving AF techniques.
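For the statistical modification category, the sketch below shows three of the named perturbations (additive noise, filtering, and quantization) and how they compose into a chained AF attack. The parameter values (SNR, cutoff frequency, bit depth) are illustrative assumptions, not the paper's evaluation settings; pitch shifting is omitted for brevity but is typically applied with `librosa.effects.pitch_shift`.

```python
import numpy as np
from scipy.signal import butter, lfilter

def add_noise(x: np.ndarray, snr_db: float = 30.0) -> np.ndarray:
    """Add white Gaussian noise at an assumed target SNR (in dB)."""
    sig_power = np.mean(x ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return x + np.random.randn(*x.shape) * np.sqrt(noise_power)

def lowpass(x: np.ndarray, sr: int = 16000,
            cutoff_hz: float = 4000.0) -> np.ndarray:
    """4th-order Butterworth low-pass filter (cutoff is an assumption)."""
    b, a = butter(4, cutoff_hz / (sr / 2), btype="low")
    return lfilter(b, a, x)

def quantize(x: np.ndarray, bits: int = 8) -> np.ndarray:
    """Re-quantize the signal to a coarser bit depth."""
    levels = 2 ** (bits - 1)
    return np.round(np.clip(x, -1.0, 1.0) * levels) / levels

# Compose several perturbations, as chained AF attacks do, to degrade
# the generative signatures an ADD model relies on.
clip = np.random.uniform(-0.5, 0.5, 16000)  # stand-in for a deepfake clip
attacked = quantize(lowpass(add_noise(clip)))
```

Each individual perturbation is mild enough to preserve perceptual quality, which is precisely what makes composed AF attacks hard for detectors trained only on clean deepfakes.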