🤖 AI Summary
Fusing self-supervised learning (SSL) representations, such as those from wav2vec 2.0, with handcrafted spectral features (e.g., FBanks) in speech modeling is inefficient because the two feature sources pull gradients in conflicting directions during joint optimization. To address this, the paper proposes a multi-view feature fusion framework based on conditional computation. The method introduces (1) a gradient-aware gating mechanism that dynamically modulates gradient flow across the heterogeneous feature sources, and (2) a multi-stage dropout strategy that mitigates update conflicts arising from their disparate representation dynamics. On the multilingual MuST-C speech translation benchmark, the approach converges significantly faster than models trained on spectral features alone, matches their performance, and improves generalization and robustness to acoustic perturbations.
📝 Abstract
Recent advances have highlighted the efficacy of self-supervised learning (SSL) features in various speech-related tasks, providing lightweight and versatile multi-view speech representations. However, our study reveals that while SSL features expedite model convergence, their update directions conflict with those of traditional spectral features such as FBanks. In response, we propose a novel generalized feature fusion framework grounded in conditional computation, featuring a gradient-sensitive gating network and a multi-stage dropout strategy. This framework mitigates feature conflicts and bolsters model robustness to multi-view input features. By integrating SSL and spectral features, our approach accelerates convergence while maintaining performance on par with spectral-only models across multiple speech translation tasks on the MuST-C dataset.
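The gating and dropout ideas above can be sketched in a few lines. The following is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function names (`gated_fuse`, `view_dropout`), the scalar per-frame gate, and the choice to drop an entire feature view at once are all hypothetical simplifications of the gradient-sensitive gating network and multi-stage dropout described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fuse(ssl_feat, spec_feat, W, b):
    """Fuse two feature views with a learned per-frame gate.

    ssl_feat, spec_feat: (T, d) frame-level features already projected
    to a common dimension d. W: (2*d,) gate weights, b: scalar bias.
    Returns a (T, d) convex combination of the two views.
    """
    z = np.concatenate([ssl_feat, spec_feat], axis=-1)  # (T, 2d)
    g = sigmoid(z @ W + b)[:, None]                     # (T, 1) gate in (0, 1)
    return g * ssl_feat + (1.0 - g) * spec_feat

def view_dropout(ssl_feat, spec_feat, p_drop, rng):
    """Dropout over whole views (hypothetical stand-in for the paper's
    multi-stage strategy): with probability p_drop, zero out one view so
    the model cannot over-rely on either feature source."""
    if rng.random() < p_drop:
        if rng.random() < 0.5:
            return np.zeros_like(ssl_feat), spec_feat
        return ssl_feat, np.zeros_like(spec_feat)
    return ssl_feat, spec_feat

# Toy frame sequence: T frames, d-dimensional features per view.
T, d = 5, 8
ssl = rng.normal(size=(T, d))
spec = rng.normal(size=(T, d))
W = rng.normal(size=2 * d) * 0.1

fused = gated_fuse(*view_dropout(ssl, spec, 0.3, rng), W, 0.0)
print(fused.shape)  # (5, 8)
```

With zero gate weights the gate is exactly 0.5 and the fusion reduces to the mean of the two views; training would push the gate toward whichever view yields more useful gradients at each frame.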