🤖 AI Summary
To address a limitation of single-audio-encoder Speech large language models (LLMs)—which struggle to serve both semantics-oriented tasks (e.g., automatic speech recognition, audio captioning) and acoustics-oriented tasks (e.g., speaker number verification) with one shared feature—this paper proposes Prompt-aware Mixture (PaM). PaM connects multiple audio encoders to the LLM and uses prompt-driven experts to extract task-specific features: the prompt, which indicates the task, determines how the heterogeneous encoder outputs are combined, in place of naive concatenation or averaging. With PaM, a single unified Speech LLM surpasses the best performance achieved by any single-encoder Speech LLM on ASR, Speaker Number Verification, and audio captioning, and also outperforms conventional feature fusion baselines such as concatenation and averaging, relaxing the one-feature-fits-all constraint of the single-encoder paradigm.
📝 Abstract
Connecting audio encoders with large language models (LLMs) allows the LLM to perform various audio understanding tasks, such as automatic speech recognition (ASR) and audio captioning (AC). Most research focuses on training an adapter layer to generate a unified audio feature for the LLM. However, different tasks may require distinct features that emphasize either semantic or acoustic aspects, making task-specific audio features more desirable. In this paper, we propose Prompt-aware Mixture (PaM) to enhance Speech LLMs that use multiple audio encoders. Our approach uses different experts to extract different features based on the prompt, which indicates the task. Experiments demonstrate that with PaM, a single Speech LLM surpasses the best performances achieved by all single-encoder Speech LLMs on ASR, Speaker Number Verification, and AC tasks. PaM also outperforms other feature fusion baselines, such as concatenation and averaging.
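The paper's exact expert architecture is not detailed here, but the core idea—mixing the outputs of several frozen audio encoders under a prompt-conditioned gate instead of concatenating or averaging them—can be sketched as follows. All dimensions, the single-layer gate, and the pooled prompt embedding are illustrative assumptions, not the paper's implementation:

```python
import numpy as np


def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


rng = np.random.default_rng(0)

# Hypothetical sizes: prompt embedding dim, shared feature dim,
# number of audio encoders, and number of audio frames.
d_prompt, d_feat, n_encoders, T = 16, 32, 3, 10

# Stand-ins for per-encoder outputs, each already projected to a
# common feature size: shape (T, d_feat) per encoder.
encoder_feats = [rng.standard_normal((T, d_feat)) for _ in range(n_encoders)]

# Pooled embedding of the task prompt, e.g. "Transcribe the audio."
prompt_emb = rng.standard_normal(d_prompt)

# Gating network (a single linear layer here): the prompt decides
# how much weight each encoder's features receive.
W_gate = 0.1 * rng.standard_normal((d_prompt, n_encoders))
gate = softmax(prompt_emb @ W_gate)  # (n_encoders,), sums to 1

# Prompt-aware mixture: weighted sum of encoder features, which would
# then be passed through the adapter into the LLM.
stacked = np.stack(encoder_feats)            # (n_encoders, T, d_feat)
fused = np.tensordot(gate, stacked, axes=1)  # (T, d_feat)

print(gate.round(3), fused.shape)
```

A different prompt produces a different `gate`, so an ASR-style prompt can emphasize a semantic encoder while a speaker-counting prompt shifts weight toward an acoustic one—unlike concatenation or averaging, where the fusion is fixed regardless of the task.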