Unifying Diarization, Separation, and ASR with Multi-Speaker Encoder

📅 2025-08-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the performance limitations that arise from modeling speaker diarization (SD), speech separation (SS), and multi-speaker automatic speech recognition (ASR) disjointly in overlapping speech scenarios. The authors propose a Unified Multi-speaker Encoder (UME), built on a shared speech foundation encoder, which applies residual weighted-sum encoding (RWSE) over multiple hidden layers to achieve bottom-up representational alignment across tasks. To the authors' knowledge, UME is the first framework to jointly model SD, SS, and ASR within a single encoder, explicitly capturing their intrinsic interdependencies. Trained end-to-end on LibriMix, UME substantially improves overlapping speech processing, achieving diarization error rates (DER) of 1.37% on Libri2Mix and 2.29% on Libri3Mix, outperforming task-specific baselines.

📝 Abstract
This paper presents a unified multi-speaker encoder (UME), a novel architecture that jointly learns representations for speaker diarization (SD), speech separation (SS), and multi-speaker automatic speech recognition (ASR) tasks using a shared speech foundational encoder. We leverage the hidden representations from multiple layers of UME as a residual weighted-sum encoding (RWSE) to effectively use information from different semantic levels, contributing to bottom-up alignment between tasks. This joint training approach captures the inherent interdependencies among the tasks, enhancing overall performance on overlapping speech data. Our evaluations demonstrate that UME substantially improves over the single-task baselines dedicated to SD, SS, and multi-speaker ASR on LibriMix evaluation sets. Notably, for SD, UME outperforms the previous studies, achieving diarization error rates of 1.37% and 2.29% on Libri2Mix and Libri3Mix evaluation sets, respectively.
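The residual weighted-sum encoding described in the abstract can be sketched as a learned softmax-weighted combination of per-layer hidden states plus a residual term. This is a minimal illustration only; the exact weighting scheme and residual placement are assumptions, not the paper's formulation:

```python
import torch
import torch.nn as nn

class RWSE(nn.Module):
    """Residual weighted-sum encoding (sketch): mix hidden states from
    all encoder layers with learned softmax weights, then add a residual
    from the top layer. Details are illustrative assumptions."""

    def __init__(self, num_layers: int):
        super().__init__()
        # One learnable scalar weight per encoder layer
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states):
        # hidden_states: list of (batch, time, dim) tensors, one per layer
        stacked = torch.stack(hidden_states, dim=0)        # (L, B, T, D)
        w = torch.softmax(self.layer_weights, dim=0)       # (L,)
        weighted = (w.view(-1, 1, 1, 1) * stacked).sum(0)  # (B, T, D)
        return weighted + hidden_states[-1]                # residual from top layer
```

Pooling information from multiple layers in this way lets lower-level acoustic features (useful for separation) and higher-level semantic features (useful for ASR) both contribute to the shared representation.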
Problem

Research questions and friction points this paper is trying to address.

Unifying speaker diarization, separation, and ASR tasks
Handling overlapping speech through joint representation learning
Improving multi-speaker speech processing performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified multi-speaker encoder for joint tasks
Residual weighted-sum encoding from multiple layers
Joint training capturing task interdependencies
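The joint-training idea above amounts to one shared encoder feeding three task heads trained together. A rough sketch, where the stand-in encoder, layer sizes, and head designs are all assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class UnifiedMultiSpeakerModel(nn.Module):
    """Sketch of a shared encoder with SD, SS, and ASR heads.
    The GRU stands in for the speech foundation encoder; all
    dimensions and head designs are illustrative assumptions."""

    def __init__(self, dim: int = 256, num_speakers: int = 2, vocab: int = 32):
        super().__init__()
        self.encoder = nn.GRU(dim, dim, batch_first=True)   # stand-in shared encoder
        self.sd_head = nn.Linear(dim, num_speakers)         # frame-level speaker activity
        self.ss_head = nn.Linear(dim, dim * num_speakers)   # per-speaker separation features
        self.asr_head = nn.Linear(dim, vocab)               # per-frame token logits

    def forward(self, x):
        h, _ = self.encoder(x)                              # (B, T, D) shared representation
        return self.sd_head(h), self.ss_head(h), self.asr_head(h)
```

Because all three heads read the same encoder output, gradients from each task shape a single representation, which is one way the interdependencies between diarization, separation, and recognition can be captured.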