When Fine-Tuning is Not Enough: Lessons from HSAD on Hybrid and Adversarial Audio Spoof Detection

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Adversarial attacks that mix genuine and synthetic speech within a single utterance pose a novel threat to speaker verification, exposing the failure of conventional binary spoofing-detection paradigms in mixed-utterance scenarios. Method: We introduce HSAD, the first benchmark dataset for hybrid speech anti-spoofing, and identify critical overgeneralization and miscalibration issues in existing models. To address these, we propose a data-level adaptation strategy and a fine-grained, multi-class evaluation framework that jointly leverages spectrogram-based encoders and self-supervised waveform representations (MIT-AST, Wav2Vec 2.0, HuBERT) for precise authenticity discrimination. Contribution/Results: Our approach achieves 97.3% accuracy and a 98.9% F1-score on HSAD, demonstrating the efficacy of dataset-specific adaptation. This work establishes a new paradigm and foundational infrastructure for robust audio anti-fraud systems.

📝 Abstract
The rapid advancement of AI has enabled highly realistic speech synthesis and voice cloning, posing serious risks to voice authentication, smart assistants, and telecom security. While most prior work frames spoof detection as a binary task, real-world attacks often involve hybrid utterances that mix genuine and synthetic speech, making detection substantially more challenging. To address this gap, we introduce the Hybrid Spoofed Audio Dataset (HSAD), a benchmark containing 1,248 clean and 41,044 degraded utterances across four classes: human, cloned, zero-shot AI-generated, and hybrid audio. Each sample is annotated with spoofing method, speaker identity, and degradation metadata to enable fine-grained analysis. We evaluate six transformer-based models, including spectrogram encoders (MIT-AST, MattyB95-AST) and self-supervised waveform models (Wav2Vec2, HuBERT). Results reveal critical lessons: pretrained models overgeneralize and collapse under hybrid conditions; spoof-specific fine-tuning improves separability but struggles with unseen compositions; and dataset-specific adaptation on HSAD yields large performance gains (AST accuracy above 97% and F1-score of approximately 99%), though residual errors persist for complex hybrids. These findings demonstrate that fine-tuning alone is not sufficient; robust hybrid-aware benchmarks like HSAD are essential to expose calibration failures, model biases, and factors affecting spoof detection in adversarial environments. HSAD thus provides both a dataset and an analytic framework for building resilient and trustworthy voice authentication systems.
Problem

Research questions and friction points this paper is trying to address.

Detecting hybrid audio spoofing attacks mixing real and synthetic speech
Addressing model overgeneralization and failure in adversarial conditions
Improving spoof detection robustness beyond binary classification tasks
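The hybrid attack above can be pictured as splicing genuine and synthetic waveform segments into a single utterance. The sketch below is purely illustrative and not from the paper: the `splice` helper and the toy float lists standing in for audio samples are assumptions for demonstration.

```python
# Minimal sketch of a hybrid (mixed) utterance: a prefix taken from a
# genuine recording followed by a suffix from a synthetic clip.
# Real audio would be arrays of PCM samples; toy float lists suffice here.

def splice(real, synth, boundary):
    """Return a hybrid signal: genuine samples up to `boundary`,
    synthetic samples afterwards."""
    return real[:boundary] + synth[boundary:]

real = [0.1] * 8    # stand-in for genuine speech samples
synth = [-0.9] * 8  # stand-in for cloned/TTS samples
hybrid = splice(real, synth, 5)
```

A binary real-vs-fake detector sees mostly genuine content in such a clip, which is why mixed utterances defeat models trained only on fully real or fully synthetic audio.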
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created HSAD benchmark dataset with hybrid audio classes
Evaluated transformer models including spectrogram and waveform encoders
Demonstrated dataset-specific adaptation outperforms standard fine-tuning
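The fine-grained, multi-class evaluation over the four HSAD classes can be sketched with plain accuracy and macro-F1 computations. Only the four class names come from the paper; the metric helpers and the toy label lists below are illustrative assumptions, not the authors' code.

```python
# Four HSAD classes (from the dataset description); predictions are toy data.
CLASSES = ["human", "cloned", "zero_shot", "hybrid"]

def per_class_f1(y_true, y_pred, cls):
    # One-vs-rest precision/recall/F1 for a single class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def evaluate(y_true, y_pred):
    # Overall accuracy plus macro-F1 (unweighted mean over classes).
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    macro_f1 = sum(per_class_f1(y_true, y_pred, c) for c in CLASSES) / len(CLASSES)
    return acc, macro_f1

# Illustrative predictions: a hybrid clip misclassified as cloned,
# mirroring the paper's finding that hybrids remain the hardest class.
y_true = ["human", "cloned", "zero_shot", "hybrid", "hybrid", "human"]
y_pred = ["human", "cloned", "zero_shot", "cloned", "hybrid", "human"]
acc, macro_f1 = evaluate(y_true, y_pred)
```

Macro-averaging gives each class equal weight, so systematic failures on the (rarer, harder) hybrid class are not masked by high accuracy on the majority classes.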
Bin Hu
Department of Computer Science and Technology, Kean University, USA

Kunyang Huang
Department of Computer Science and Technology, Wenzhou-Kean University, China

Daehan Kwak
Associate Professor, Computer Science, Kean University
Smart Systems, Intelligent Systems, AI and Machine Learning, Ubiquitous Computing, Networking

Meng Xu
Department of Computer Science and Technology, Kean University, USA

Kuan Huang
Department of Computer Science and Technology, Kean University, USA