🤖 AI Summary
Existing replay attack detection methods rely on single-channel recordings and exhibit poor generalization across unseen acoustic environments.
Method: We propose a multi-channel, spatial-cue-based framework for enhancing replay attack detection. First, we construct an acoustic simulation system that combines empirically measured loudspeaker directivity patterns (a novelty in replay attack research) with room impulse responses, multi-channel convolution, and noise injection to generate high-fidelity multi-channel synthetic data for both reverberant and anechoic spoofing scenarios. Second, we employ the state-of-the-art M-ALRAD detector, which explicitly exploits the spatial features captured by microphone arrays.
Contribution/Results: This work establishes the first physically interpretable, multi-channel simulation paradigm specifically tailored for replay attack detection. Experiments demonstrate that our framework significantly improves model robustness and generalization performance under unseen environmental conditions, outperforming prior single-channel approaches.
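The simulation pipeline described above (loudspeaker coloration, room-to-microphone impulse responses, multi-channel convolution, and noise injection at a target SNR) can be sketched as below. This is a minimal illustration, not the paper's implementation: the function name, the white-noise placeholder for the omnidirectional/diffuse noise conditions, and the SNR handling are all assumptions.

```python
import numpy as np

def simulate_replay(speech, spk_ir, room_rirs, snr_db=20.0, seed=0):
    """Sketch of a multi-channel replay simulation (illustrative only).

    speech    : (T,)  clean source signal
    spk_ir    : (L,)  loudspeaker impulse response (models directivity/coloration)
    room_rirs : (M, K) room impulse responses from the loudspeaker to M mics
    """
    rng = np.random.default_rng(seed)
    # Replay device: the speech is first colored by the loudspeaker response.
    emitted = np.convolve(speech, spk_ir)
    # Propagate through the room to every microphone of the array.
    mics = np.stack([np.convolve(emitted, rir) for rir in room_rirs])
    # Placeholder noise (white); the paper's setup uses omnidirectional/diffuse noise.
    noise = rng.standard_normal(mics.shape)
    # Scale the noise so the mixture reaches the requested SNR.
    gain = np.sqrt(np.mean(mics**2) / (np.mean(noise**2) * 10 ** (snr_db / 10)))
    return mics + gain * noise
```

Anechoic versus reverberant spoofing settings would differ in which signal is fed in as `speech`: a dry recording for the anechoic case, or one that already carries room reverberation for the reverberant case.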
📝 Abstract
Replay speech attacks pose a significant threat to voice-controlled systems, especially in smart environments where voice assistants are widely deployed. While multi-channel audio offers spatial cues that can make replay detection more robust, existing datasets and methods rely predominantly on single-channel recordings. In this work, we introduce an acoustic framework for simulating multi-channel replay speech configurations from publicly available resources. Our setup models both genuine and spoofed speech across varied environments, incorporating realistic microphone and loudspeaker impulse responses, room acoustics, and noise conditions. To improve the realism of the simulation, the framework employs measured loudspeaker directivities during the replay stage. We define two spoofing settings, corresponding to whether reverberant or anechoic speech is replayed, and evaluate the impact of omnidirectional and diffuse noise on detection performance. Using the state-of-the-art M-ALRAD model for replay speech detection, we demonstrate that synthetic data can improve the detector's generalization across unseen enclosures.