🤖 AI Summary
This work addresses the performance limitations arising from disjoint modeling of speaker diarization (SD), speech separation (SS), and multi-speaker automatic speech recognition (ASR) in overlapping speech scenarios. We propose a Unified Multi-speaker Encoder (UME), built upon a shared speech foundation encoder, which applies a residual weighted-sum encoding (RWSE) over multiple hidden layers to achieve bottom-up representational alignment across tasks. To our knowledge, UME is the first framework to jointly model SD, SS, and ASR within a single encoder, explicitly capturing their intrinsic interdependencies. Trained end-to-end on LibriMix, UME substantially improves overlapping speech processing: it achieves diarization error rates (DER) of 1.37% on Libri2Mix and 2.29% on Libri3Mix, clearly outperforming task-specific baselines.
📝 Abstract
This paper presents a unified multi-speaker encoder (UME), a novel architecture that jointly learns representations for speaker diarization (SD), speech separation (SS), and multi-speaker automatic speech recognition (ASR) tasks using a shared speech foundation encoder. We leverage the hidden representations from multiple layers of UME as a residual weighted-sum encoding (RWSE) to effectively use information from different semantic levels, contributing to bottom-up alignment between the tasks. This joint training approach captures the inherent interdependencies among the tasks, enhancing overall performance on overlapping speech data. Our evaluations demonstrate that UME substantially improves over the single-task baselines dedicated to SD, SS, and multi-speaker ASR on the LibriMix evaluation sets. Notably, for SD, UME outperforms previous studies, achieving diarization error rates of 1.37% and 2.29% on the Libri2Mix and Libri3Mix evaluation sets, respectively.
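The abstract states that RWSE combines hidden representations from multiple encoder layers, but it does not spell out the exact formulation. The following PyTorch sketch is one minimal reading of a residual weighted-sum: learned softmax weights over per-layer hidden states, plus a residual path from the top layer. The class name `ResidualWeightedSum`, the softmax normalization, and the choice of residual source are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ResidualWeightedSum(nn.Module):
    """Illustrative RWSE sketch: fuse per-layer hidden states of a shared
    speech foundation encoder with learned weights plus a residual path.
    The exact formulation in the paper may differ."""

    def __init__(self, num_layers: int):
        super().__init__()
        # One learnable scalar per encoder layer, normalized with softmax below.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states: list[torch.Tensor]) -> torch.Tensor:
        # hidden_states: one (batch, frames, dim) tensor per encoder layer.
        stacked = torch.stack(hidden_states, dim=0)          # (L, B, T, D)
        weights = torch.softmax(self.layer_weights, dim=0)   # (L,)
        fused = (weights.view(-1, 1, 1, 1) * stacked).sum(dim=0)
        # Residual connection from the top layer (an assumption here).
        return fused + hidden_states[-1]


# Toy usage: 12 encoder layers, batch of 2, 50 frames, 256-dim states.
layers = [torch.randn(2, 50, 256) for _ in range(12)]
fused = ResidualWeightedSum(num_layers=12)(layers)  # (2, 50, 256)
```

Under this reading, the fused representation would then feed the SD, SS, and ASR task heads, letting each task draw on whichever semantic levels of the shared encoder it needs.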