🤖 AI Summary
To address catastrophic forgetting in continual learning for multilingual automatic speech recognition—particularly the severe performance degradation (a WER increase to 14.2%) on previously learned languages caused by the shared embedding layer in the decoder—this paper proposes a "surgical" dynamic expansion mechanism for the embedding layer: each newly introduced language maintains its own dedicated word-embedding subnetwork. The authors further design a task-wise beam search that triggers ASR self-correction when language identification (LID) errors occur. Built upon the Whisper architecture, the method combines dynamic embedding-layer switching with few-shot continual fine-tuning (10 hours per language). Evaluated on 10 unseen Common Voice languages, it reduces the average WER on prior languages to 11.9% while preserving accuracy on the new languages. This work is the first to jointly decouple word embeddings and model task-level decoding decisions in continual multilingual ASR, improving the model's scalability and stability.
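The embedding-surgery idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class name, the list-based tables, and the language codes are all invented for the example, and the real method operates on Whisper's decoder embedding matrix.

```python
# Minimal sketch of "Embedding Layer Surgery": the decoder keeps one
# token-embedding table per newly added language and swaps the active
# table in based on the identified language. Old languages keep using
# the original (untouched) table, so their performance is preserved.
# All names here are illustrative, not from the paper's codebase.

class SurgicalEmbeddings:
    def __init__(self, base_table):
        # base_table: embeddings of the pre-trained languages (kept frozen)
        self.base = base_table
        self.per_lang = {}  # language id -> dedicated trainable copy

    def add_language(self, lang):
        # A new language gets its own copy, initialised from the base table;
        # continual fine-tuning then updates only this copy.
        self.per_lang[lang] = [row[:] for row in self.base]

    def select(self, lang):
        # Pre-trained languages resolve to the original table,
        # newly added languages to their dedicated copy.
        return self.per_lang.get(lang, self.base)

# Toy usage: 3-token vocab, 2-dim embeddings
base = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
emb = SurgicalEmbeddings(base)
emb.add_language("sw")
emb.per_lang["sw"][0] = [9.0, 9.0]  # "fine-tuning" touches only the copy

assert emb.select("en") is base             # old language: original table
assert emb.select("sw")[0] == [9.0, 9.0]    # new language: its own copy
assert base[0] == [0.0, 0.0]                # base table is unchanged
```

The key design point is that forgetting in the shared embedding table is avoided by construction: old-language embeddings are never written to during adaptation.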
📝 Abstract
Current multilingual ASR models support only a fraction of the world's languages. Continual Learning (CL) aims to tackle this problem by adding new languages to pre-trained models while avoiding the loss of performance on existing languages, known as Catastrophic Forgetting (CF). However, existing CL methods overlook the adaptation of the token embedding lookup table at the decoder, despite its significant contribution to CF. We propose Embedding Layer Surgery, where a separate copy of the token embeddings is created for each new language, and the corresponding copy is selected to replace the old languages' embeddings when transcribing that new language. Unfortunately, this means LID errors also cause incorrect ASR embedding selection. Our Task-wise Beam Search allows self-correction for such mistakes. By adapting Whisper on 10 hours of data for each of 10 unseen languages from Common Voice, our method reduces the Average WER (AWER) of the pre-trained languages from 14.2% to 11.9% compared with Experience Replay, without compromising the AWER of the unseen languages.
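The self-correction idea behind Task-wise Beam Search can be illustrated with a toy rescoring loop: rather than committing to the top-1 LID result, keep candidates for several languages and let the downstream ASR scores revise the choice. This is a hedged sketch under invented names; `lid_scores` and `score_asr` stand in for the real model outputs, and the actual method interleaves this with beam search inside Whisper's decoder.

```python
# Sketch of LID self-correction: decode with the embeddings of each of
# the top-k LID candidate languages, then pick the hypothesis with the
# best joint LID + ASR score. A wrong initial LID guess can thus be
# overridden when its transcription scores poorly.
import math

def task_wise_decode(lid_scores, score_asr, k=2):
    # lid_scores: {lang: log P(lang | audio)}; consider the top-k languages
    candidates = sorted(lid_scores, key=lid_scores.get, reverse=True)[:k]
    best = None
    for lang in candidates:
        # score_asr decodes using that language's embedding table and
        # returns (transcript, log-probability of the transcript)
        text, asr_logp = score_asr(lang)
        total = lid_scores[lang] + asr_logp
        if best is None or total > best[0]:
            best = (total, lang, text)
    return best[1], best[2]

# Toy example where LID prefers "pt" but the ASR score rescues "es"
lid = {"pt": math.log(0.6), "es": math.log(0.4)}
def fake_asr(lang):
    return ("hola", math.log(0.9)) if lang == "es" else ("ola?", math.log(0.1))

lang, text = task_wise_decode(lid, fake_asr)
assert lang == "es" and text == "hola"
```

The joint score lets a confident transcription outweigh a marginal LID decision, which is exactly the failure mode the abstract describes: without this, a mis-identified language silently selects the wrong embedding copy.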