Zipper-LoRA: Dynamic Parameter Decoupling for Speech-LLM based Multilingual Speech Recognition

📅 2026-03-18
🤖 AI Summary
This work addresses the stability-plasticity dilemma in multilingual speech recognition caused by data imbalance: fully shared parameters subject low-resource languages to negative interference, while entirely independent parameters impede cross-lingual knowledge transfer. To resolve this, the authors propose Zipper-LoRA, a novel framework featuring rank-level dynamic decoupling that enables fine-grained parameter sharing and disentanglement within the LoRA subspace via lightweight language-conditioned routers. The approach incorporates Static, Hard, and Soft variants alongside a two-stage training strategy—including an Initial-B warm start—to share parameters when compatible and decouple them when conflicting. Experiments across 12 languages with mixed resource levels demonstrate that Zipper-LoRA significantly outperforms both fully shared and fully independent baselines, with especially pronounced gains under extremely low-resource conditions, and maintains robust performance across both chunked and non-chunked encoder configurations.

📝 Abstract
Speech Large Language Models (Speech-LLMs) have emerged as a powerful approach for automatic speech recognition (ASR) by aligning speech encoders with large language models. However, adapting these systems to multilingual settings with imbalanced data distributions remains challenging. In such scenarios, a stability-plasticity dilemma often arises: fully shared Parameter-Efficient Fine-Tuning (PEFT) can cause negative inter-lingual interference for under-represented languages, while fully language-specific tuning limits the beneficial cross-lingual knowledge transfer needed for low-resource tasks. To address this, we propose Zipper-LoRA, a novel rank-level decoupling framework with three variants (Static, Hard, and Soft) that dynamically synthesizes LoRA updates from shared and language-specific subspaces. By using a lightweight language-conditioned router, Zipper-LoRA dynamically controls the contribution of each subspace at the LoRA rank level, enabling fine-grained sharing where languages are compatible and strict decoupling when conflicts occur. To further stabilize optimization under imbalanced data, we propose a two-stage training strategy with an Initial-B warm start that significantly accelerates convergence. Experiments on a 12-language mixed-resource setting show that Zipper-LoRA consistently outperforms both fully shared and independent baselines, particularly in extremely low-resource scenarios. Moreover, we demonstrate that these gains are robust across both chunked and non-chunked encoder configurations, confirming the framework's reliability for practical, large-scale multilingual ASR. Our code and data will be available at https://github.com/YuCeong-May/Zipper-LoRA for reproducibility.
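The abstract's core mechanism — a language-conditioned router that gates each LoRA rank between a shared subspace and a language-specific one — can be sketched as follows. This is a minimal illustration of the Soft variant as described above, not the authors' released implementation; all class and parameter names (`SoftZipperLoRA`, `n_langs`, the embedding-based router) are assumptions, and the paper's actual routing and initialization details (e.g. the Initial-B warm start) may differ.

```python
import torch
import torch.nn as nn

class SoftZipperLoRA(nn.Module):
    """Hedged sketch of a Soft rank-level decoupling LoRA layer:
    a language-conditioned router produces per-rank gates in [0, 1]
    that blend shared and language-specific low-rank subspaces."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, n_langs: int = 12):
        super().__init__()
        # Shared low-rank factors (A: down-projection, B: up-projection).
        # B starts at zero so the adapter initially leaves the frozen model unchanged.
        self.A_shared = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B_shared = nn.Parameter(torch.zeros(d_out, rank))
        # Language-specific factors, one pair per language.
        self.A_lang = nn.Parameter(torch.randn(n_langs, rank, d_in) * 0.01)
        self.B_lang = nn.Parameter(torch.zeros(n_langs, d_out, rank))
        # Lightweight router: language ID -> one soft gate per LoRA rank.
        self.router = nn.Sequential(nn.Embedding(n_langs, rank), nn.Sigmoid())

    def forward(self, x: torch.Tensor, lang_id: torch.Tensor) -> torch.Tensor:
        g = self.router(lang_id)  # (rank,) gates: 1 -> shared, 0 -> language-specific
        # Blend the two subspaces rank by rank (fine-grained sharing/decoupling).
        A = g.unsqueeze(-1) * self.A_shared + (1 - g).unsqueeze(-1) * self.A_lang[lang_id]
        B = g.unsqueeze(0) * self.B_shared + (1 - g).unsqueeze(0) * self.B_lang[lang_id]
        # Return the LoRA delta; in practice this is added to the frozen W @ x.
        return x @ A.t() @ B.t()
```

A hard variant would replace the sigmoid gates with a discrete per-rank assignment (each rank belongs wholly to the shared or the language-specific subspace), and the static variant would fix that assignment before training rather than learning it.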
Problem

Research questions and friction points this paper is trying to address.

multilingual speech recognition
data imbalance
stability-plasticity dilemma
negative interference
cross-lingual transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zipper-LoRA
dynamic parameter decoupling
multilingual speech recognition
language-conditioned routing
low-resource ASR
Yuxiang Mei
Shanghai Engineering Research Center of Intelligent Education and Bigdata, Shanghai Normal University, Shanghai, 200234, China
Delai Qiu
Unisound AI Technology Co., Ltd., Beijing, China
Shengping Liu
Unisound AI Technology Co., Ltd., Beijing, China
Jiaen Liang
Unisound AI Technology Co., Ltd., Beijing, China
Yanhua Long
Professor, Shanghai Normal University
Speech signal processing