🤖 AI Summary
This work addresses the scarcity of natural conversational speech data for Vietnamese automatic speech recognition (ASR): existing datasets consist predominantly of formal read or news-style utterances. To bridge this gap, we introduce VietSuperSpeech, a dataset comprising 267.39 hours of spontaneous, real-world conversational audio drawn from informal contexts such as casual chats, vlogs, and overseas Vietnamese community interactions. Recordings are standardized to 16 kHz mono WAV and transcribed via pseudo-labeling with the Zipformer-30M-RNNT-6000h model deployed through Sherpa-ONNX. After rigorous quality filtering, the data are split into training and test sets using a fixed random seed. The publicly released dataset includes 52,023 annotated utterances totaling 13.8 million Vietnamese characters with full diacritics, addressing a critical gap for ASR in conversational Vietnamese scenarios.
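The fixed-seed train/test split can be sketched as below. The seed value (42) and the roughly 10% held-out fraction are illustrative assumptions; the source states only that a fixed random seed was used and reports the resulting sizes (46,822 train / 5,201 development-test).

```python
import random

def split_dataset(sample_ids, test_fraction=0.1, seed=42):
    """Deterministically split sample IDs into (train, test) lists.

    NOTE: seed=42 and test_fraction=0.1 are illustrative assumptions;
    the dataset card states only that a fixed random seed was used.
    """
    ids = sorted(sample_ids)       # canonical order before shuffling
    rng = random.Random(seed)      # fixed seed -> identical split every run
    rng.shuffle(ids)
    n_test = int(len(ids) * test_fraction)
    return ids[n_test:], ids[:n_test]

# Hypothetical utterance IDs matching the released dataset's size.
train, test = split_dataset([f"utt_{i:05d}" for i in range(52023)])
```

Sorting before shuffling makes the split reproducible regardless of the order in which files are listed on disk.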
📝 Abstract
We introduce VietSuperSpeech, a large-scale Vietnamese automatic speech recognition (ASR) dataset of 52,023 audio-text pairs totaling 267.39 hours, with a distinctive focus on casual conversational speech. Unlike existing Vietnamese ASR corpora, which predominantly feature read speech, news narration, or audiobook content, VietSuperSpeech is sourced from four publicly accessible YouTube channels spanning everyday conversation, personal vlogging, overseas Vietnamese community dialogue, and informal commentary: the very speech styles encountered in real-world chatbot, customer support, call center, and hotline deployments. All audio is standardized to 16 kHz mono PCM WAV and segmented into 3-30 second utterances. Transcriptions are generated via pseudo-labeling with the Zipformer-30M-RNNT-6000h model (Nguyen, 2025), pre-trained on 6,000 hours of Vietnamese speech and deployed through Sherpa-ONNX. After quality filtering, the dataset is split with a fixed random seed into 46,822 training samples (240.67 hours) and 5,201 development/test samples (26.72 hours). Transcripts average 266 characters per utterance, totaling 13.8 million fully diacritized Vietnamese characters. VietSuperSpeech fills a critical gap in the Vietnamese ASR ecosystem: while corpora such as VLSP2020, VIET_BUD500, VietSpeech, FLEURS, VietMed, Sub-GigaSpeech2-Vi, viVoice, and Sub-PhoAudioBook provide broad coverage of formal and read speech, none specifically targets the casual, spontaneous register indispensable for conversational AI applications. VietSuperSpeech is publicly released at https://huggingface.co/datasets/thanhnew2001/VietSuperSpeech.
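The 3-30 second segmentation constraint can be expressed as a minimal duration filter; whether the boundaries are inclusive is an assumption, since the abstract states only the range.

```python
def filter_segments(durations_s, min_s=3.0, max_s=30.0):
    """Keep segment durations within the 3-30 s window; report total hours.

    Inclusive boundaries are an assumption; the abstract states only
    that utterances are segmented into 3-30 second pieces.
    """
    kept = [d for d in durations_s if min_s <= d <= max_s]
    return kept, sum(kept) / 3600.0  # (kept durations, total hours)

# Example: segments of 1.2 s and 31.0 s fall outside the window.
kept, hours = filter_segments([1.2, 4.5, 18.0, 31.0, 29.9])
```

Applied over a whole corpus, the returned hour count is how figures such as the 240.67 training hours would be tallied.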