🤖 AI Summary
ASR acoustic model architecture design is bottlenecked by manual, experience-driven tuning and by the prohibitive computational cost of neural architecture search. To address this, the paper proposes the Dynamic Model Architecture Optimization (DMAO) framework, which introduces an in-training "grow-and-drop" parameter reallocation mechanism: during training, capacity is shifted from under-utilized modules to those where it is most beneficial, at negligible additional training cost for a given model complexity. Evaluated with CTC on LibriSpeech, TED-LIUM-v2, and Switchboard, DMAO achieves up to a 6% relative WER reduction under identical training budgets, and its gains hold consistently across diverse architectures, model sizes, and datasets.
📝 Abstract
Architecture design is inherently complex. Existing approaches rely either on handcrafted rules, which demand extensive empirical expertise, or on automated methods such as neural architecture search, which are computationally intensive. In this paper, we introduce DMAO, an architecture optimization framework that employs a grow-and-drop strategy to automatically reallocate parameters during training, shifting resources from under-utilized parts of the model to those where they are most beneficial. Notably, DMAO introduces only negligible training overhead at a given model complexity. We evaluate DMAO in experiments with CTC on the LibriSpeech, TED-LIUM-v2, and Switchboard datasets. The results show that, with the same amount of training resources, DMAO consistently improves WER by up to 6% relative across various architectures, model sizes, and datasets. Furthermore, we analyze the resulting parameter-redistribution patterns and report several insightful findings.
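The grow-and-drop idea in the abstract can be illustrated with a minimal sketch. The module names, the scalar importance scores, and the fixed reallocation fraction below are illustrative assumptions for exposition, not the paper's actual importance criterion or schedule; the sketch only shows the budget-preserving reallocation step itself:

```python
def grow_and_drop(param_counts, importance, frac=0.2):
    """One reallocation step: move `frac` of the parameters of the
    least-important module to the most important one, keeping the
    total parameter budget (model complexity) constant.

    param_counts: dict mapping module name -> parameter count
    importance:   dict mapping module name -> importance score
                  (how such scores are computed is a design choice;
                  here they are simply given)
    """
    total = sum(param_counts.values())
    worst = min(importance, key=importance.get)  # drop candidate
    best = max(importance, key=importance.get)   # grow candidate
    moved = int(param_counts[worst] * frac)
    param_counts[worst] -= moved  # drop: shrink under-utilized module
    param_counts[best] += moved   # grow: expand high-importance module
    assert sum(param_counts.values()) == total  # fixed model complexity
    return param_counts


# Hypothetical three-module model with made-up importance scores.
counts = {"ffn": 4096, "attn": 2048, "conv": 1024}
scores = {"ffn": 0.1, "attn": 0.9, "conv": 0.5}
print(grow_and_drop(counts, scores))
# → {'ffn': 3277, 'attn': 2867, 'conv': 1024}
```

Because each step conserves the total parameter count, repeating it during training redistributes capacity without changing the overall model size or training cost, which is consistent with the "negligible overhead at a given model complexity" claim above.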