VocalBench-zh: Decomposing and Benchmarking the Speech Conversational Abilities in Mandarin Context

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current Chinese speech interaction models lack a systematic, end-to-end speech-to-speech (S2S) evaluation benchmark, hindering rigorous capability assessment and fair cross-model comparison. To address this, we introduce VocalBench-zh, the first fine-grained Chinese speech-to-speech dialogue benchmark, comprising 10 curated subsets with over 10,000 high-quality speech-semantic pairs covering 12 user-centric capabilities. We propose the first capability-decomposed S2S evaluation framework for Chinese, integrating multimodal LLM-driven speech understanding and generation with a hybrid evaluation pipeline that combines strict human annotation and automated metrics. Extensive evaluation of 14 state-of-the-art models reveals persistent bottlenecks in semantic consistency, instruction adherence, and robustness. All code and data are fully open-sourced, filling a critical gap in Chinese speech interaction benchmarks and advancing the development of next-generation speech foundation models.

📝 Abstract
The development of multi-modal large language models (LLMs) has led to intelligent systems capable of speech interaction. As one of the most widely spoken languages globally, Mandarin is supported by most models to enhance their applicability and reach. However, the scarcity of comprehensive speech-to-speech (S2S) benchmarks in Mandarin contexts impedes systematic evaluation for developers and hinders fair model comparison for users. In this work, we propose VocalBench-zh, an ability-level evaluation suite adapted to the Mandarin context, consisting of 10 well-crafted subsets and over 10K high-quality instances covering 12 user-oriented characteristics. Evaluation experiments on 14 mainstream models reveal common challenges for current approaches and highlight the need for new insights into next-generation speech interactive systems. The evaluation code and datasets will be available at https://github.com/SJTU-OmniAgent/VocalBench-zh.
Problem

Research questions and friction points this paper is trying to address.

Addresses the lack of comprehensive Mandarin speech-to-speech evaluation benchmarks
Systematically assesses conversational abilities of multimodal language models in Chinese
Enables fair comparison of speech interaction systems through standardized testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed VocalBench-zh, a Mandarin speech conversational benchmark
Created 10 subsets with over 10K instances covering 12 user-oriented characteristics
Evaluated 14 mainstream models, revealing shared challenges in current systems
Authors
- Heyang Liu, Shanghai Jiao Tong University
- Ziyang Cheng, University of Electronic Science and Technology of China
- Yuhao Wang, Shanghai Jiao Tong University
- Hongcheng Liu, Shanghai Jiao Tong University
- Yiqi Li, Shanghai Jiao Tong University
- Ronghua Wu, Ant Group
- Qunshan Gu, Ant Group
- Yanfeng Wang, Shanghai Jiao Tong University
- Yu Wang, Shanghai Jiao Tong University