🤖 AI Summary
This work proposes the first end-to-end automatic speech recognition (ASR) framework with explicit phoneme modeling tailored for Vietnamese, addressing poor generalization to out-of-vocabulary words and sensitivity to training data bias. Leveraging the highly transparent grapheme-to-phoneme correspondence inherent in Vietnamese orthography, the approach explicitly incorporates phoneme-level representations into a Transformer-based architecture to enhance the model's capacity to capture pronunciation regularities. Experimental results on two public Vietnamese ASR datasets demonstrate significant performance improvements, particularly in out-of-vocabulary scenarios, where the model exhibits stronger generalization. Moreover, the proposed method effectively mitigates the adverse effects of training data bias, offering a novel paradigm for languages characterized by phonemic orthographies.
📝 Abstract
Vietnamese has a phonetic orthography, where each grapheme corresponds to at most one phoneme and vice versa. Exploiting this high grapheme-phoneme transparency, we propose ViSpeechFormer (Vietnamese Speech TransFormer), a phoneme-based approach for Vietnamese Automatic Speech Recognition (ASR). To the best of our knowledge, this is the first Vietnamese ASR framework that explicitly models phonemic representations. Experiments on two publicly available Vietnamese ASR datasets show that ViSpeechFormer achieves strong performance, generalizes better to out-of-vocabulary words, and is less affected by training bias. This phoneme-based paradigm is also promising for other languages with phonetic orthographies. The code will be released upon acceptance of this paper.
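To illustrate the grapheme-phoneme transparency the abstract relies on, the sketch below implements a toy longest-match grapheme-to-phoneme converter. This is not the paper's actual G2P module; the table covers only a handful of real Vietnamese grapheme-phoneme pairs (e.g. "ph" → /f/, "kh" → /x/, "nh" → /ɲ/) and omits tones and most graphemes.

```python
# Toy illustration of Vietnamese grapheme-to-phoneme transparency.
# NOT the paper's method: a tiny, incomplete lookup table plus
# greedy longest-match segmentation, with tones omitted.

G2P = {
    "ph": "f",    # digraphs first: each maps to a single phoneme
    "kh": "x",
    "nh": "ɲ",
    "th": "tʰ",
    "a": "a",
    "o": "ɔ",
    "n": "n",
    "t": "t",
}

def to_phonemes(word: str) -> list[str]:
    """Greedily match the longest grapheme at each position, then look it up."""
    phones, i = [], 0
    while i < len(word):
        for span in (2, 1):  # try digraphs before single letters
            g = word[i:i + span]
            if g in G2P:
                phones.append(G2P[g])
                i += span
                break
        else:
            raise KeyError(f"no grapheme mapping for {word[i]!r}")
    return phones

print(to_phonemes("khanh"))  # ['x', 'a', 'ɲ']
```

Because the mapping is near one-to-one, a phoneme-level target sequence can be derived deterministically from the transcript, which is what makes explicit phoneme modeling attractive for Vietnamese.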