🤖 AI Summary
Current Chinese speech interaction models lack a systematic, end-to-end speech-to-speech (S2S) evaluation benchmark, hindering rigorous capability assessment and fair cross-model comparison. To address this, we introduce VocalBench-zh, the first fine-grained Chinese S2S dialogue benchmark, comprising 10 curated subsets with over 10,000 high-quality speech-semantic pairs that cover 12 user-centric capabilities. We propose the first capability-decomposed S2S evaluation framework for Chinese, integrating multimodal LLM-driven speech understanding and generation with a hybrid evaluation pipeline that combines strict human annotation and automated metrics. Extensive evaluation of 14 state-of-the-art models reveals persistent bottlenecks in semantic consistency, instruction adherence, and robustness. The code and data are open-sourced, filling a critical gap in Chinese speech interaction benchmarks and advancing the development of next-generation speech foundation models.
📝 Abstract
The development of multi-modal large language models (LLMs) has led to intelligent systems capable of speech interaction. As one of the most widely spoken languages globally, Mandarin is supported by most models to broaden their applicability and reach. However, the scarcity of comprehensive speech-to-speech (S2S) benchmarks for Mandarin contexts impedes systematic evaluation for developers and hinders fair model comparison for users. In this work, we propose VocalBench-zh, a capability-level evaluation suite adapted to the Mandarin context, consisting of 10 well-crafted subsets and over 10K high-quality instances covering 12 user-oriented capabilities. Evaluation experiments on 14 mainstream models reveal common challenges for current approaches and highlight the need for new insights into next-generation speech interaction systems. The evaluation code and datasets will be available at https://github.com/SJTU-OmniAgent/VocalBench-zh.