🤖 AI Summary
This work addresses the continual learning (CL) challenge in multilingual, multi-domain automatic speech recognition (ASR), where models must adapt incrementally to new languages and domains under realistic, non-stationary, and non-uniform distribution shifts. To this end, we introduce Nirantar, which formalizes LIDIL, the first joint language-and-domain incremental learning paradigm for CL-ASR, explicitly modeling such dynamic shifts. We also introduce the first large-scale, real-world CL-ASR benchmark, comprising 3,250 hours of spoken audio (including 1,720 newly collected hours) spanning 22 languages and 208 districts of India, all with human-verified transcriptions. Comprehensive evaluation demonstrates that existing CL methods suffer substantial and unstable performance degradation on this benchmark. Our key contributions are: (1) the formalization of the LIDIL task setting; (2) a high-quality, broadly representative open-source dataset; and (3) an empirical robustness analysis of CL-ASR approaches, establishing a critical foundation and actionable directions for future research.
📝 Abstract
We introduce Nirantar, a comprehensive framework for evaluating continual learning (CL) in multilingual and multi-domain ASR. Designed to reflect real-world CL challenges, Nirantar leverages data collected incrementally across 22 languages and 208 districts in India through natural episodes. This enables evaluation across Language-Incremental (LIL), Domain-Incremental (DIL), and the novel Language-Incremental Domain-Incremental Learning (LIDIL) scenarios. Unlike prior work that relies on simulated episodes, Nirantar presents dynamic, non-uniform language and domain shifts, making it an ideal testbed for CL research. With 3,250 hours of human-transcribed speech, including 1,720 hours newly introduced in this work, our framework enables systematic benchmarking of CL methods. We evaluate existing approaches and demonstrate that no single method performs consistently well, underscoring the need for more robust CL strategies.