🤖 AI Summary
Adversarial attacks leveraging hybrid real-and-synthetic speech pose a novel threat to speaker verification, exposing the failure of conventional binary spoofing detection paradigms in mixed-utterance scenarios. Method: We introduce HSAD—the first benchmark dataset for hybrid speech anti-spoofing—and identify critical overgeneralization and miscalibration issues in existing models. To address these, we propose a dataset-specific adaptation strategy and a fine-grained, multi-class evaluation framework that jointly leverages spectrogram-based encoders and self-supervised waveform representations (MIT-AST, Wav2Vec 2.0, HuBERT) for precise authenticity discrimination. Contribution/Results: Our approach achieves 97.3% accuracy and a 98.9% F1-score on HSAD, demonstrating the efficacy of dataset-specific adaptation. This work establishes a new paradigm and foundational infrastructure for robust audio anti-fraud systems.
📝 Abstract
The rapid advancement of AI has enabled highly realistic speech synthesis and voice cloning, posing serious risks to voice authentication, smart assistants, and telecom security. While most prior work frames spoof detection as a binary task, real-world attacks often involve hybrid utterances that mix genuine and synthetic speech, making detection substantially more challenging. To address this gap, we introduce the Hybrid Spoofed Audio Dataset (HSAD), a benchmark containing 1,248 clean and 41,044 degraded utterances across four classes: human, cloned, zero-shot AI-generated, and hybrid audio. Each sample is annotated with spoofing method, speaker identity, and degradation metadata to enable fine-grained analysis. We evaluate six transformer-based models, including spectrogram encoders (MIT-AST, MattyB95-AST) and self-supervised waveform models (Wav2Vec2, HuBERT). Results reveal critical lessons: pretrained models overgeneralize and collapse under hybrid conditions; spoof-specific fine-tuning improves separability but struggles with unseen compositions; and dataset-specific adaptation on HSAD yields large performance gains (AST accuracy above 97% with an F1-score near 99%), though residual errors persist for complex hybrids. These findings demonstrate that fine-tuning alone is not sufficient; robust hybrid-aware benchmarks like HSAD are essential to expose calibration failures, model biases, and factors affecting spoof detection in adversarial environments. HSAD thus provides both a dataset and an analytic framework for building resilient and trustworthy voice authentication systems.
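The reported numbers (accuracy above 97%, F1 near 99%) come from a four-class evaluation over human, cloned, zero-shot, and hybrid audio rather than a binary bonafide/spoof split. A minimal sketch of that style of multi-class scoring, using plain Python in place of the paper's actual evaluation code (class names and label encoding here are illustrative assumptions, not taken from HSAD's release):

```python
# Illustrative multi-class spoof-detection metrics: accuracy and macro-F1
# over the four HSAD-style classes. Hypothetical label strings; the real
# dataset's label scheme may differ.
CLASSES = ["human", "cloned", "zero_shot", "hybrid"]

def accuracy(y_true, y_pred):
    """Fraction of utterances whose predicted class matches the true class."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, classes=CLASSES):
    """Unweighted mean of per-class F1 scores (macro averaging)."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if (tp + fp) else 0.0
        rec = tp / (tp + fn) if (tp + fn) else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if (prec + rec) else 0.0)
    return sum(f1s) / len(f1s)

# Toy usage: a hybrid utterance misclassified as human drags down both
# the "human" precision and the "hybrid" recall.
y_true = ["human", "cloned", "hybrid", "hybrid"]
y_pred = ["human", "cloned", "hybrid", "human"]
print(accuracy(y_true, y_pred))  # 0.75
```

Macro averaging weights each class equally, so errors on the rarer hybrid class are not masked by good performance on the abundant human/cloned classes; this matters for the calibration failures the abstract highlights.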