Efficient Supernet Training with Orthogonal Softmax for Scalable ASR Model Compression

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To adaptively balance model size and performance for automatic speech recognition (ASR) under diverse hardware constraints, this paper proposes a supernet-based framework that jointly trains encoders of varying sizes. The core contribution is OrthoSoftmax, an orthogonalized softmax mechanism that identifies and selects subnetworks at fine granularity without the computational overhead of a conventional neural architecture search (NAS). OrthoSoftmax supports FLOPs-aware, multi-criteria, and multi-granularity component-level configuration, and its selections reveal structural regularities across subnetworks. Evaluated on LibriSpeech and TED-LIUM-v2, models of various sizes derived from a single supernet training run achieve word error rates (WERs) comparable to, or slightly better than, those of individually trained counterparts, improving both training efficiency and deployment flexibility across heterogeneous hardware platforms.
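To make the selection mechanism concrete, here is a minimal, speculative PyTorch sketch of how multiple orthogonal softmax functions could drive component selection in a supernet. Everything in it (the class name OrthoSoftmaxSelector, the slot-based parameterization, and the overlap penalty) is an illustrative assumption, not the authors' implementation.

```python
# Speculative sketch, not the paper's code: K selection "slots" each learn a
# logit vector over N candidate components (e.g., encoder blocks). A softmax
# per slot yields a soft selection; an orthogonality penalty pushes the K
# distributions toward disjoint components, so no explicit NAS search is needed.
import torch
import torch.nn.functional as F

class OrthoSoftmaxSelector(torch.nn.Module):
    def __init__(self, num_components: int, num_slots: int):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.randn(num_slots, num_components))

    def forward(self):
        probs = F.softmax(self.logits, dim=-1)            # (K, N) soft selections
        gram = probs @ probs.t()                          # pairwise overlap between slots
        off_diag = gram - torch.diag(torch.diagonal(gram))
        ortho_loss = off_diag.abs().sum()                 # zero only if slots pick disjoint components
        return probs, ortho_loss

    @torch.no_grad()
    def hard_select(self):
        return self.logits.argmax(dim=-1)                 # one component index per slot

# Example: keep 4 of 12 encoder blocks; ortho_loss is added to the training objective.
selector = OrthoSoftmaxSelector(num_components=12, num_slots=4)
probs, ortho_loss = selector()
kept_blocks = selector.hard_select()
```

During training, the soft probabilities can gate component outputs so that all model sizes share one set of weights; at deployment, the per-slot argmax yields a discrete subnet.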

📝 Abstract
ASR systems are deployed across diverse environments, each with specific hardware constraints. We use supernet training to jointly train multiple encoders of varying sizes, enabling dynamic model size adjustment to fit hardware constraints without redundant training. Moreover, we introduce a novel method called OrthoSoftmax, which applies multiple orthogonal softmax functions to efficiently identify optimal subnets within the supernet, avoiding resource-intensive search. This approach also enables more flexible and precise subnet selection by allowing selection based on various criteria and levels of granularity. Our results with CTC on LibriSpeech and TED-LIUM-v2 show that FLOPs-aware component-wise selection achieves the best overall performance. With the same number of training updates from a single job, WERs for all model sizes are comparable to, or slightly better than, those of individually trained models. Furthermore, we analyze patterns in the selected components and reveal interesting insights.
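The FLOPs-aware component-wise selection mentioned in the abstract could be realized with a soft budget term such as the sketch below. This is an illustrative assumption, not the paper's formulation: the per-component FLOPs table, the hinge penalty, and the variable names (probs, flops_per_component, target_flops) are all hypothetical.

```python
# Speculative FLOPs-aware budget term (assumed, not from the paper): weight
# each candidate component's cost by its total selection probability and
# penalize only the excess over a target budget.
import torch
import torch.nn.functional as F

probs = F.softmax(torch.randn(4, 12), dim=-1)    # (K, N) soft selections, as in the sketch above
flops_per_component = torch.rand(12) * 1e9       # hypothetical per-component FLOPs
target_flops = 2.0e9                             # hypothetical deployment budget

# Expected FLOPs of the soft subnet; the hinge contributes gradients only when over budget.
expected_flops = (probs.sum(dim=0) * flops_per_component).sum()
budget_loss = F.relu(expected_flops - target_flops)
```

Adding such a term to the training objective would let a single supernet be steered toward different FLOPs targets, consistent with the deployment-flexibility goal described above.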
Problem

Research questions and friction points this paper is trying to address.

Automatic Speech Recognition (ASR)
Model Scalability
Performance Adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonal Softmax
Supernet Training
Automatic Speech Recognition (ASR)
Jingjing Xu
Machine Learning and Human Language Technology Group, RWTH Aachen University, Germany; AppTek GmbH, Germany
Eugen Beck
AppTek.ai
Machine Learning, Automated Speech Recognition
Zijian Yang
Machine Learning and Human Language Technology Group, RWTH Aachen University, Germany; AppTek GmbH, Germany
Ralf Schlüter
Machine Learning and Human Language Technology Group, RWTH Aachen University, Germany; AppTek GmbH, Germany