🤖 AI Summary
Code-switched (CS) speech translation faces dual challenges: complex semantic modeling and scarcity of annotated CS data. To address these, we propose a Mixture-of-Experts (MoE)-based speech projector that constructs language-specific semantic subspaces. Our method employs a multi-stage training paradigm: (1) pretraining on monolingual ASR and speech translation (ST) data; and (2) fine-tuning with language-specific, intra-group load-balancing, and transition losses, enabling fine-grained speech-text semantic alignment without requiring CS-labeled data. Experiments demonstrate significant improvements over strong baselines across multiple mainstream CS-ST benchmarks. The approach exhibits strong generalization and low dependence on labeled CS data, offering a novel, resource-efficient paradigm for cross-lingual speech translation in low-resource settings.
📝 Abstract
Code-switching (CS) speech translation (ST) refers to translating speech that alternates between two or more languages into text in a target language, a task that poses significant challenges due to the complexity of semantic modeling and the scarcity of CS data. Previous studies tend to rely on the model itself to learn semantic structure implicitly during training, and to address data scarcity through inefficient and costly manual annotation. To mitigate these limitations, we propose enhancing Large Language Models (LLMs) with a Mixture of Experts (MoE) speech projector, where each expert specializes in the semantic subspace of a specific language, enabling fine-grained modeling of speech features. Additionally, we introduce a multi-stage training paradigm that utilizes readily available monolingual automatic speech recognition (ASR) and monolingual ST data, facilitating speech-text alignment and improving translation capability. During training, we combine a language-specific loss with an intra-group load-balancing loss to guide the MoE speech projector in allocating tokens to the appropriate experts, across expert groups and within each group, respectively. To bridge the data gap across training stages and improve adaptation to the CS scenario, we further employ a transition loss that smooths the shift between stages, mitigating the scarcity of high-quality CS speech translation data. Extensive experiments on widely used datasets demonstrate the effectiveness and generality of our approach.
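To make the routing idea concrete, the following is a minimal NumPy sketch of an MoE projector whose experts are partitioned into per-language groups, with a language-specific loss (probability mass routed to the correct language group) and a Switch-style intra-group load-balancing loss. All names (`MoEProjector`, the loss forms, the dense mixture) are illustrative assumptions for exposition, not the paper's implementation; the transition loss is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MoEProjector:
    """Toy MoE speech projector: experts are grouped by language."""

    def __init__(self, dim, n_groups=2, experts_per_group=2):
        self.n_groups = n_groups
        self.k = experts_per_group
        n_experts = n_groups * experts_per_group
        # Linear router and one linear "expert" per slot (MLPs in practice).
        self.router = rng.normal(0.0, 0.1, (dim, n_experts))
        self.experts = [rng.normal(0.0, 0.1, (dim, dim)) for _ in range(n_experts)]

    def forward(self, x, lang_ids):
        """x: (T, dim) speech features; lang_ids: (T,) language index per frame."""
        T = len(x)
        gates = softmax(x @ self.router)                      # (T, n_experts)
        # Dense mixture of expert outputs (real MoE would route top-k sparsely).
        out = sum(gates[:, [e]] * (x @ W) for e, W in enumerate(self.experts))

        # Language-specific loss: -log of the gate mass assigned to the
        # expert group matching each frame's language.
        grouped = gates.reshape(T, self.n_groups, self.k)     # (T, G, k)
        group_mass = grouped.sum(-1)                          # (T, G)
        lang_loss = -np.log(group_mass[np.arange(T), lang_ids] + 1e-9).mean()

        # Intra-group load balancing (Switch-style: k * sum(frac_i * mean_prob_i)),
        # computed within the correct language group for that group's frames.
        within = grouped / (grouped.sum(-1, keepdims=True) + 1e-9)
        balance_loss = 0.0
        for g in range(self.n_groups):
            probs = within[lang_ids == g, g]                  # (T_g, k)
            if len(probs):
                frac = np.bincount(probs.argmax(-1), minlength=self.k) / len(probs)
                balance_loss += self.k * (frac * probs.mean(0)).sum()
        return out, lang_loss, balance_loss

proj = MoEProjector(dim=8)
feats = rng.normal(size=(16, 8))          # 16 frames of 8-dim speech features
langs = rng.integers(0, 2, size=16)       # per-frame language labels (0 or 1)
out, l_lang, l_bal = proj.forward(feats, langs)
print(out.shape, float(l_lang), float(l_bal))
```

In training, these two terms would be added (with weights) to the main translation loss, pulling each frame toward its language's expert group while keeping usage balanced among that group's experts.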