🤖 AI Summary
Existing benchmarks lack quantitative evaluation of full-duplex speech dialogue systems (FDSDS) under realistic conditions such as user interruptions, response latency, and acoustic noise.
Method: We introduce the first comprehensive benchmark for FDSDS, featuring an automated simulation pipeline that integrates large language models (LLMs), automatic speech recognition (ASR), and text-to-speech (TTS) to generate 40 hours of speech data—comprising 293 multi-turn dialogues and 1,200 controlled interruptions—with configurable interruption timing and realistic noise injection.
Contribution/Results: We propose novel metrics—including interruption response latency, robustness decay rate, and interaction fluency—to enable fine-grained, quantitative assessment of interruption handling. Experiments reveal significant degradation in the response accuracy of leading open-source FDSDS under frequent interruptions and noisy conditions, underscoring the benchmark’s role in advancing natural, robust full-duplex dialogue research.
📝 Abstract
Full-duplex spoken dialogue systems (FDSDS) enable more natural human-machine interaction than traditional turn-taking SDS by allowing real-time user interruptions and backchanneling. However, existing benchmarks lack metrics for full-duplex scenarios, e.g., evaluating model performance during user interruptions. In this paper, we present a comprehensive FD benchmarking pipeline utilizing LLMs, TTS, and ASR to address this gap. It assesses an FDSDS's ability to handle user interruptions, manage delays, and maintain robustness in challenging scenarios, using diverse novel metrics. We applied our benchmark to three open-source FDSDS (Moshi, Freeze-omni, and VITA-1.5) using over 40 hours of generated speech, with 293 simulated conversations and 1,200 interruptions. The results show that all models continue to face challenges, such as failing to respond to user interruptions, under frequent disruptions and noisy conditions. Demonstrations, data, and code will be released.
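To make the interruption-response-latency metric concrete, here is a minimal sketch of how such a latency could be computed from timestamped dialogue event logs. The event schema (`type`, `t`, `system_react_t`) and the averaging choice are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: mean interruption response latency from an event log.
# Field names (type, t, system_react_t) are assumptions for illustration.

def interruption_response_latency(events):
    """Mean delay (seconds) between each user interruption and the
    system's first observable reaction (e.g., pausing its own speech).
    Interruptions the system never reacts to are excluded here; a real
    benchmark would also penalize them (e.g., via a response-rate metric)."""
    latencies = [
        e["system_react_t"] - e["t"]
        for e in events
        if e["type"] == "user_interrupt" and e.get("system_react_t") is not None
    ]
    return sum(latencies) / len(latencies) if latencies else None

log = [
    {"type": "user_interrupt", "t": 3.2, "system_react_t": 3.9},
    {"type": "user_interrupt", "t": 8.0, "system_react_t": 8.5},
    {"type": "user_interrupt", "t": 12.0, "system_react_t": None},  # ignored turn
]
print(round(interruption_response_latency(log), 3))  # 0.6
```

A per-interruption timeout (after which the system is scored as having failed to respond) would separate the latency metric from the robustness metric.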