🤖 AI Summary
To address the scarcity of audio-visual speech recognition (AVSR) data for low-resource languages such as Vietnamese, this paper proposes the first end-to-end automated framework for multimodal audio-visual data acquisition. The framework integrates face detection, lip landmark tracking, audio-video temporal alignment, self-supervised speech quality assessment, and controlled noise simulation to efficiently extract and robustly filter high-fidelity lip-motion–speech pairs from web videos. The authors introduce VieLip, the first large-scale Vietnamese AVSR benchmark dataset, and design a cross-modal synchronization optimization strategy coupled with noise-robust filtering. Experiments demonstrate that AVSR models trained on this data achieve a 42% relative reduction in word error rate (WER) over audio-only ASR under cocktail-party noise conditions, while maintaining comparable performance in clean environments. This work advances both the modeling capability and the practical applicability of AVSR for low-resource languages.
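Since the summary names concrete pipeline stages (face detection, lip landmark tracking, lip-region extraction), a minimal sketch of the visual front end may help. This is an illustrative reconstruction using OpenCV and MediaPipe FaceMesh, not the authors' implementation; the landmark indices, crop size, and the function name `extract_lip_crops` are assumptions.

```python
# Illustrative sketch of a lip-region extraction stage, assuming OpenCV for
# video decoding and MediaPipe FaceMesh for landmarks. Not the paper's code.
import cv2
import mediapipe as mp

LIP_IDS = [61, 291, 13, 14]  # mouth corners + inner upper/lower lips (FaceMesh)

def extract_lip_crops(video_path, crop_size=96):
    """Yield fixed-size lip-region crops for each frame with a detected face."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                                max_num_faces=1)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # FaceMesh expects RGB; OpenCV decodes BGR.
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue  # no face: skip (a real pipeline would track across gaps)
        h, w = frame.shape[:2]
        pts = result.multi_face_landmarks[0].landmark
        # Center a square crop on the mean of the lip landmarks.
        cx = int(sum(pts[i].x for i in LIP_IDS) / len(LIP_IDS) * w)
        cy = int(sum(pts[i].y for i in LIP_IDS) / len(LIP_IDS) * h)
        half = crop_size // 2
        crop = frame[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
        if crop.size == 0:
            continue  # lip center too close to the frame border
        yield cv2.resize(crop, (crop_size, crop_size))
    cap.release()
```

A full pipeline would then align each crop sequence with the audio track and apply the quality and noise-robust filtering stages described above.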
📝 Abstract
Audio-Visual Speech Recognition (AVSR) has gained significant attention recently due to its robustness against noise, which often degrades conventional speech recognition systems that rely solely on audio features. Despite this advantage, AVSR models remain limited by the scarcity of large datasets, especially for languages other than English. Automated data collection offers a promising solution. This work presents a practical approach to generating AVSR datasets from raw video, refining existing techniques for improved efficiency and accessibility. We demonstrate its broad applicability by developing a baseline AVSR model for Vietnamese. Experiments show that the automatically collected dataset enables a strong baseline, achieving performance competitive with robust ASR systems in clean conditions and significantly outperforming them in noisy environments such as cocktail-party settings. This efficient method provides a pathway to extending AVSR to more languages, particularly under-resourced ones.
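The headline result is robustness under cocktail-party noise, which is typically evaluated by mixing babble noise into clean speech at a controlled signal-to-noise ratio (SNR). The paper's exact noise protocol is not specified here; the sketch below shows the standard SNR-controlled mixing recipe, and `mix_at_snr` and the example WER figures in the comments are illustrative assumptions.

```python
# Standard SNR-controlled noise injection for robustness testing: scale a
# "cocktail-party" babble track so the mixture hits a target SNR.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Return speech + noise, with noise scaled to the requested SNR in dB."""
    noise = np.resize(noise, speech.shape)       # loop/trim noise to match length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12        # avoid divide-by-zero
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# The reported "42% relative WER reduction" means
#   (WER_audio_only - WER_avsr) / WER_audio_only = 0.42.
# For example, hypothetical WERs of 50% (audio-only) and 29% (AVSR) would
# give (50 - 29) / 50 = 0.42; the paper's actual WER values are not shown here.
```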