🤖 AI Summary
To advance large-vocabulary continuous visual speech recognition (LVC-VSR) for Mandarin Chinese, this paper presents CNVSRC 2024, the second Chinese Continuous Visual Speech Recognition Challenge, which evaluates systems under two realistic scenarios: reading speech recorded in studios and speech from Internet videos. Building on CNVSRC 2023, the challenge retains the same core datasets (CN-CVS for training and CNVSRC-Single/Multi for development and evaluation) while adding two key improvements: a stronger baseline system and an additional dataset, CN-CVS2-P1, for the open tracks to increase data volume and diversity. Participating systems demonstrated notable innovations in data preprocessing, feature extraction, model design, and training strategies, yielding substantial error-rate reductions on both the CNVSRC-Single and CNVSRC-Multi test sets. This work further pushes the state of the art in Chinese LVC-VSR and supports the practical deployment of lip-reading systems in real-world Chinese linguistic contexts.
📝 Abstract
This paper presents the second Chinese Continuous Visual Speech Recognition Challenge (CNVSRC 2024), which builds on CNVSRC 2023 to advance research in Chinese Large Vocabulary Continuous Visual Speech Recognition (LVC-VSR). The challenge evaluates two test scenarios: reading in recording studios and Internet speech. CNVSRC 2024 uses the same datasets as its predecessor CNVSRC 2023: CN-CVS for training and CNVSRC-Single/Multi for development and evaluation. However, CNVSRC 2024 introduces two key improvements: (1) a stronger baseline system, and (2) an additional dataset, CN-CVS2-P1, for the open tracks to increase data volume and diversity. The new challenge has demonstrated several important innovations in data preprocessing, feature extraction, model design, and training strategies, further pushing the state of the art in Chinese LVC-VSR. More details and resources are available at the official website.