FD-Bench: A Full-Duplex Benchmarking Pipeline Designed for Full Duplex Spoken Dialogue Systems

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks lack quantitative evaluation of full-duplex spoken dialogue systems (FDSDS) under realistic conditions such as user interruptions, response latency, and acoustic noise. Method: We introduce the first comprehensive benchmark for FDSDS, featuring an automated simulation pipeline that integrates large language models (LLMs), automatic speech recognition (ASR), and text-to-speech (TTS) to generate over 40 hours of speech data (293 multi-turn dialogues and 1,200 controlled interruptions) with configurable interruption timing and realistic noise injection. Contribution/Results: We propose novel metrics, including interruption response latency, robustness decay rate, and interaction fluency, to enable fine-grained, quantitative assessment of interruption handling. Experiments reveal significant degradation in the response accuracy of leading open-source FDSDS under high-interruption and noisy conditions, underscoring the benchmark's role in advancing natural, robust full-duplex dialogue research.
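
The metric names above come from the summary, but this page does not spell out their definitions. As a rough illustration only, the Python sketch below shows one plausible way such quantities could be computed from logged interruption events; the event fields, the 0.5 s response threshold, and the finite-difference decay estimate are all assumptions made for this example, not FD-Bench's actual formulas.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List, Optional

@dataclass
class InterruptionEvent:
    """One simulated user barge-in (field names are illustrative)."""
    interrupt_time: float                   # when the user starts interrupting (s)
    system_response_time: Optional[float]   # when the system reacts, or None if it never does

def interruption_response_latency(events: List[InterruptionEvent]) -> float:
    """Mean delay between a user interruption and the system's reaction,
    over the interruptions the system actually responded to."""
    latencies = [e.system_response_time - e.interrupt_time
                 for e in events if e.system_response_time is not None]
    return mean(latencies) if latencies else float("inf")

def response_success_rate(events: List[InterruptionEvent],
                          max_latency: float = 0.5) -> float:
    """Fraction of interruptions answered within `max_latency` seconds
    (the 0.5 s threshold is an arbitrary choice for this example)."""
    ok = sum(1 for e in events
             if e.system_response_time is not None
             and e.system_response_time - e.interrupt_time <= max_latency)
    return ok / len(events) if events else 0.0

def robustness_decay_rate(success_by_condition: List[float]) -> float:
    """Average drop in success rate per step of increasing difficulty
    (e.g. more frequent interruptions or louder noise); a simple
    finite-difference stand-in for whatever definition the paper uses."""
    if len(success_by_condition) < 2:
        return 0.0
    drops = [success_by_condition[i] - success_by_condition[i + 1]
             for i in range(len(success_by_condition) - 1)]
    return mean(drops)

if __name__ == "__main__":
    events = [InterruptionEvent(1.0, 1.3), InterruptionEvent(4.2, None),
              InterruptionEvent(7.5, 7.9)]
    print(interruption_response_latency(events))   # ~0.35 s
    print(response_success_rate(events))           # ~0.67
    print(robustness_decay_rate([0.9, 0.7, 0.4]))  # ~0.25 per difficulty step
```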

📝 Abstract
Full-duplex spoken dialogue systems (FDSDS) enable more natural human-machine interactions by allowing real-time user interruptions and backchanneling, compared to traditional SDS that rely on turn-taking. However, existing benchmarks lack metrics for FD scenes, e.g., evaluating model performance during user interruptions. In this paper, we present a comprehensive FD benchmarking pipeline utilizing LLMs, TTS, and ASR to address this gap. It assesses FDSDS's ability to handle user interruptions, manage delays, and maintain robustness in challenging scenarios with diverse novel metrics. We applied our benchmark to three open-source FDSDS (Moshi, Freeze-omni, and VITA-1.5) using over 40 hours of generated speech, with 293 simulated conversations and 1,200 interruptions. The results show that all models continue to face challenges, such as failing to respond to user interruptions, under frequent disruptions and noisy conditions. Demonstrations, data, and code will be released.
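
To make the LLM, TTS, and ASR simulation loop described above concrete, here is a minimal sketch of how such a generation pipeline could be wired together. All component interfaces (GenerateUserTurn, Synthesize, Transcribe, system_reply, add_noise) and the probability defaults are hypothetical stand-ins for this illustration, not FD-Bench's released code.

```python
import random
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical component interfaces, kept as plain callables so the sketch is
# self-contained; a real harness would wrap an actual LLM, TTS engine and ASR model.
GenerateUserTurn = Callable[[List[str]], str]  # dialogue history -> next user utterance (text)
Synthesize = Callable[[str], bytes]            # text -> waveform bytes
Transcribe = Callable[[bytes], str]            # waveform bytes -> text

@dataclass
class SimulatedTurn:
    user_text: str
    interrupt_offset: Optional[float]  # seconds into the system reply at which a barge-in starts
    noisy: bool

def add_noise(audio: bytes) -> bytes:
    """Placeholder: a real pipeline would mix recorded background noise into the waveform."""
    return audio

def simulate_dialogue(llm: GenerateUserTurn,
                      tts: Synthesize,
                      asr: Transcribe,
                      system_reply: Callable[[bytes], bytes],
                      n_turns: int = 5,
                      interrupt_prob: float = 0.4,
                      noise_prob: float = 0.3) -> List[SimulatedTurn]:
    """Generate one multi-turn conversation with randomly timed interruptions and
    optional noise injection (the probabilities are illustrative defaults)."""
    history: List[str] = []
    turns: List[SimulatedTurn] = []
    for _ in range(n_turns):
        user_text = llm(history)                  # LLM writes the user's next line
        user_audio = tts(user_text)               # TTS turns it into speech
        noisy = random.random() < noise_prob
        if noisy:
            user_audio = add_noise(user_audio)    # inject acoustic noise on some turns
        # Decide whether (and when) this turn's reply gets interrupted; here the offset
        # is only logged, whereas a real harness would overlay the next user utterance
        # into the system's ongoing reply at that point and measure its reaction time.
        interrupt_offset = (round(random.uniform(0.2, 2.0), 2)
                            if random.random() < interrupt_prob else None)
        reply_audio = system_reply(user_audio)    # the FDSDS under test responds
        history += [user_text, asr(reply_audio)]  # ASR closes the loop for the next LLM turn
        turns.append(SimulatedTurn(user_text, interrupt_offset, noisy))
    return turns
```

A harness along these lines can be scaled to many conversations simply by reseeding the random draws and varying interrupt_prob and noise_prob, which mirrors the configurable interruption timing and noise injection described above.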
Problem

Research questions and friction points this paper is trying to address.

Lack of metrics for full-duplex dialogue system evaluation
Need to assess handling of user interruptions and delays
Evaluate robustness in noisy and high-interruption scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes LLMs, TTS, and ASR technologies
Assesses handling of user interruptions and delays
Evaluates robustness with diverse novel metrics
Yizhou Peng
Alibaba-NTU Global e-Sustainability CorpLab, Nanyang Technological University, Singapore
Yi-Wen Chao
College of Computing and Data Science, Nanyang Technological University, Singapore
Dianwen Ng
MiroMind, Alibaba-NTU Singapore Joint Research Institute
Artificial Intelligence, Deep Learning, Speech Recognition, Self-supervised Learning
Yukun Ma
Alibaba Group
ASR, SLU
Chongjia Ni
Alibaba Inc., Singapore
Bin Ma
Alibaba Inc., Singapore
Eng Siong Chng
College of Computing and Data Science, Nanyang Technological University, Singapore