🤖 AI Summary
To address the challenge of adapting pre-trained speech foundation models to dynamic computational resources in edge and IoT environments, this paper proposes an input-driven lightweight layer-skipping mechanism. Without altering the pre-trained model architecture, the approach employs a lightweight selection network that models the input features to decide, individually for each input, which layers to execute during inference, enabling input-conditioned adaptive computation. Unlike existing layer-dropping methods, it requires no architectural redesign and replaces coarse-grained, stochastic skipping with informed, input-dependent selection. Experiments on four public speech benchmarks demonstrate that the method significantly reduces computational load while matching or surpassing the accuracy of baselines such as early exiting. These results validate both its efficiency and its seamless compatibility with existing pre-trained models.
📝 Abstract
Curating foundation speech models for edge and IoT settings, where computational resources vary over time, requires dynamic architectures featuring adaptable reduction strategies. One emerging approach is layer dropping ($\mathcal{LD}$), which skips a fraction of the layers of a backbone network during inference to reduce the computational load, thereby transforming static models into dynamic ones. However, existing approaches exhibit limitations either in how they select layers or by significantly modifying the neural architecture. To this end, we propose input-driven $\mathcal{LD}$ that employs the network's input features and a lightweight layer-selecting network to determine the optimal combination of processing layers. Extensive experimentation on four public speech and audio benchmarks, using two different pre-trained foundation models, demonstrates the effectiveness of our approach, thoroughly outperforming random dropping and producing results on par with (or better than) early exit.
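The core idea, a small gating network that looks at the input and decides which backbone layers to run, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the backbone is stood in for by residual MLP layers, the selection network is a single linear map with sigmoid gates, and all dimensions and thresholds are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): feature dim and layer count.
D, N_LAYERS = 16, 6

# Stand-ins for the frozen pre-trained backbone layers: simple residual MLPs.
layer_weights = [rng.standard_normal((D, D)) * 0.1 for _ in range(N_LAYERS)]

# Lightweight selection network (assumed design): one linear map from pooled
# input features to a per-layer keep probability.
gate_W = rng.standard_normal((D, N_LAYERS)) * 0.1

def forward(x, threshold=0.5):
    """x: (T, D) sequence of input features. Returns output and kept-layer mask."""
    pooled = x.mean(axis=0)                         # summarize the input utterance
    keep_prob = 1 / (1 + np.exp(-pooled @ gate_W))  # sigmoid gate per layer
    keep = keep_prob >= threshold                   # hard skip decision at inference
    h = x
    for i in range(N_LAYERS):
        if keep[i]:                                  # execute only the selected layers
            h = h + np.tanh(h @ layer_weights[i])    # residual form keeps shapes intact
    return h, keep

x = rng.standard_normal((50, D))
out, kept = forward(x)
print(out.shape, int(kept.sum()), "of", N_LAYERS, "layers executed")
```

Because skipping a layer simply passes the hidden state through unchanged, the pre-trained architecture itself is untouched; only the lightweight gate decides, per input, how much of the network to spend.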