What You Read Isn't What You Hear: Linguistic Sensitivity in Deepfake Speech Detection

📅 2025-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Prior work on spoofing detection has largely overlooked text-level robustness, focusing instead on acoustic perturbations and leaving a critical gap in understanding how language variation affects anti-spoofing systems. Method: We propose transcript-based, semantically equivalent adversarial attacks, integrating feature attribution analysis, cross-detector robustness evaluation, and realistic fraud-scenario replication. Contribution/Results: We systematically demonstrate, for the first time, that linguistic complexity and audio embedding similarity jointly exacerbate model vulnerability. Critically, minimal yet semantics-preserving textual modifications significantly degrade detection performance: attack success rates exceed 60% on open-source detectors, while commercial detector accuracy plummets from 100% to 32%; notably, our attacks evade detection in a Brad Pitt voice-impersonation fraud scenario. These findings expose a fundamental limitation of acoustic-only defense paradigms and underscore the urgent need for joint acoustic-linguistic modeling to strengthen robustness against real-world spoofing threats.

📝 Abstract
Recent advances in text-to-speech technologies have enabled realistic voice generation, fueling audio-based deepfake attacks such as fraud and impersonation. While audio anti-spoofing systems are critical for detecting such threats, prior work has predominantly focused on acoustic-level perturbations, leaving the impact of linguistic variation largely unexplored. In this paper, we investigate the linguistic sensitivity of both open-source and commercial anti-spoofing detectors by introducing transcript-level adversarial attacks. Our extensive evaluation reveals that even minor linguistic perturbations can significantly degrade detection accuracy: attack success rates surpass 60% on several open-source detector-voice pairs, and notably, one commercial detector's accuracy drops from 100% on synthetic audio to just 32%. Through a comprehensive feature attribution analysis, we identify that both linguistic complexity and model-level audio embedding similarity contribute strongly to detector vulnerability. We further demonstrate the real-world risk via a case study replicating the Brad Pitt audio deepfake scam, using transcript adversarial attacks to completely bypass commercial detectors. These results highlight the need to move beyond purely acoustic defenses and account for linguistic variation in the design of robust anti-spoofing systems. All source code will be publicly available.
Problem

Research questions and friction points this paper is trying to address.

Investigates linguistic sensitivity in deepfake speech detection systems
Examines impact of transcript-level adversarial attacks on detection accuracy
Highlights need for linguistic-aware defenses in anti-spoofing systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introducing transcript-level adversarial attacks
Analyzing linguistic complexity and embedding similarity
Bypassing detectors with minor linguistic perturbations
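The attack idea described above can be sketched as a simple search loop: enumerate semantics-preserving rewrites of a transcript and keep the first one whose synthesized audio evades the detector. This is a minimal illustration, not the paper's implementation; the synonym table and the `mock_detector` (which stands in for the full TTS-plus-detector pipeline) are hypothetical stand-ins.

```python
from itertools import product

# Hypothetical table of near-synonymous substitutions (assumption).
SYNONYMS = {
    "money": ["funds", "cash"],
    "send": ["transfer", "wire"],
    "quickly": ["promptly", "fast"],
}

def variants(transcript):
    """Enumerate semantics-preserving rewrites by swapping listed words."""
    words = transcript.split()
    options = [[w] + SYNONYMS.get(w, []) for w in words]
    for combo in product(*options):
        yield " ".join(combo)

def mock_detector(transcript):
    """Stand-in for TTS synthesis + anti-spoofing detection: returns True
    if the (synthesized) audio would be flagged as fake. This toy version
    only flags transcripts containing the word 'money'."""
    return "money" in transcript.split()

def attack(transcript, detector):
    """Return the first semantics-preserving variant that evades the
    detector, or None if every variant is still flagged."""
    for v in variants(transcript):
        if not detector(v):
            return v
    return None

evasive = attack("send money quickly", mock_detector)
```

In a real pipeline, each variant would be voiced with a TTS system before being scored by the detector; the loop structure stays the same.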
Binh Nguyen
Independent Researcher
Shuji Shi
Indiana University
Ryan Ofman
Deep Media AI
Thai Le
Assistant Professor in Computer Science, Indiana University
Machine learning & AI