🤖 AI Summary
This work addresses the challenging task of speaker-unlabeled, time-boundary-free multi-speaker automatic speech recognition (MS-ASR). We propose an end-to-end ASR framework that integrates speaker embeddings and temporal boundary modeling into the Qwen2.5 large language model (LLM). To enhance speaker discrimination, speech-separation awareness, and cross-lingual generalization, we introduce language-specific adapters and apply LoRA-based fine-tuning. Evaluated on the MLC-SLM Challenge dataset, our approach achieves tcpWERs of 23.56% on the development set and 18.08% on the test set, substantially outperforming the official baseline. This marks the first empirical validation of LLM-based architectures for unsupervised MS-ASR, demonstrating both effectiveness and scalability in jointly modeling speaker identity, segmentation, and multilingual recognition without explicit speaker diarization or alignment supervision.
📝 Abstract
We present the DKU system for Task 2 of the MLC-SLM Challenge, which aims to perform multi-speaker automatic speech recognition directly from raw audio without oracle speaker labels or time boundaries. Our approach builds upon a diarization-aware framework that integrates speaker embeddings and temporal utterance boundaries into a Qwen2.5-based large language model (LLM). We then enhance the system's multilingual performance by fine-tuning language-specific adapters and LoRA modules within the LLM decoder. Finally, our system achieves tcpWERs of 23.56% and 18.08% on the development and test sets of the MLC-SLM dataset, substantially outperforming the official baseline.
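To make the LoRA fine-tuning idea concrete, here is a minimal sketch of the underlying math: instead of updating a full weight matrix W in the LLM decoder, one trains a low-rank delta B·A (rank r much smaller than the hidden size), so the adapted projection is y = Wx + (alpha/r)·B(Ax). The tiny dimensions, the identity base weight, and the per-language adapter dictionary below are illustrative assumptions for exposition, not the paper's actual configuration.

```python
def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha, r):
    """Frozen base projection W x plus the scaled low-rank update B(A x)."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))  # A: r x d_in, B: d_out x r
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy dimensions: d_in = d_out = 3, rank r = 1.
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]          # frozen base weight (identity here, for clarity)
A = [[1.0, 1.0, 1.0]]          # 1 x 3 down-projection (trainable)
B = [[0.5], [0.0], [0.0]]      # 3 x 1 up-projection (trainable)

# Hypothetical language-specific routing: each language keys its own (A, B) pair.
adapters = {"en": (A, B)}

x = [2.0, 3.0, 4.0]
A_lang, B_lang = adapters["en"]
y = lora_forward(W, A_lang, B_lang, x, alpha=1.0, r=1)
print(y)  # base [2, 3, 4] plus 0.5*(2+3+4) on the first coordinate -> [6.5, 3.0, 4.0]
```

Because only the small A and B matrices are updated per language, the shared Qwen2.5 backbone stays frozen, which is what keeps multilingual adaptation cheap.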