🤖 AI Summary
Existing Mandarin-English code-switching ASR datasets suffer from limited scale, low spontaneity, and incomplete dialogue recordings with coarse-grained transcriptions, hindering robust modeling for real-world scenarios. To address this, we introduce CS-Dialogue—a large-scale, spontaneous, bilingual conversational speech dataset—comprising 104 hours of audio from 200 native speakers, capturing natural, continuous code-switching phenomena and providing full-length recordings with complete transcriptions. We systematically characterize the distributional patterns of code-switching in spontaneous dialogue and establish a unified benchmark for conversational code-switching ASR evaluation. Extensive evaluation across Transformer, Conformer, and Branchformer architectures reveals significant performance bottlenecks of state-of-the-art ASR systems (e.g., Whisper) on code-switched speech. CS-Dialogue fills a critical gap in large-scale, high-quality, naturalistic bilingual code-switching speech data, serving as foundational infrastructure for robust code-switching ASR research.
📝 Abstract
Code-switching (CS), the alternation between two or more languages within a single conversation, presents significant challenges for automatic speech recognition (ASR) systems. Existing Mandarin-English code-switching datasets often suffer from limited size, low spontaneity, and a lack of full-length dialogue recordings with transcriptions, hindering the development of robust ASR models for real-world conversational scenarios. This paper introduces CS-Dialogue, a novel large-scale Mandarin-English code-switching speech dataset comprising 104 hours of spontaneous conversations from 200 speakers. Unlike previous datasets, CS-Dialogue provides full-length dialogue recordings with complete transcriptions, capturing naturalistic code-switching patterns in continuous speech. We describe the data collection and annotation processes, present detailed statistics of the dataset, and establish benchmark ASR performance using state-of-the-art models. Our experiments with Transformer, Conformer, and Branchformer models demonstrate the challenges of code-switching ASR, and show that existing pre-trained models such as Whisper still leave considerable room for improvement. The CS-Dialogue dataset will be made freely available for all academic purposes.