🤖 AI Summary
To address target speech extraction (TSE) when speaker priors such as enrollment utterances or visual cues are unavailable, this paper proposes StyleTSE, the first cross-modal TSE framework guided by natural language style descriptions (e.g., "young female, cheerful intonation"). Methodologically, it employs a dual-modality encoder to jointly model audio and textual semantics, builds on a separation backbone adapted from SepFormer, and introduces contrastive learning for cross-modal alignment. Key contributions: (1) the first integration of free-form text descriptions as primary guidance for TSE; (2) TextrolMix, the first benchmark dataset with fine-grained textual annotations for TSE; and (3) support for flexible few-shot and zero-enrollment scenarios. Experiments on TextrolMix show a 3.2 dB improvement in signal-to-interference ratio (SIR) over audio-only baselines, validating the value of linguistic cues under identity-ambiguous conditions.
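The summary mentions contrastive learning for cross-modal alignment between audio and text clues. As a rough illustration of that technique, here is a minimal CLIP-style InfoNCE sketch in PyTorch; the class name `ClueAlignment`, the projection layers, and all dimensions are assumptions for illustration, not the paper's actual modules.

```python
# Minimal sketch of contrastive cross-modal alignment (CLIP-style InfoNCE).
# ASSUMPTION: class names and dimensions are illustrative only;
# they are not taken from the StyleTSE paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClueAlignment(nn.Module):
    def __init__(self, audio_dim=512, text_dim=768, embed_dim=256):
        super().__init__()
        # Project each modality into a shared clue-embedding space.
        self.audio_proj = nn.Linear(audio_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        # Learnable temperature, initialized near ln(1/0.07) as in CLIP.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))

    def forward(self, audio_feat, text_feat):
        # audio_feat: (B, audio_dim) pooled speaking-style embedding
        # text_feat:  (B, text_dim)  pooled description embedding
        a = F.normalize(self.audio_proj(audio_feat), dim=-1)
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        logits = self.logit_scale.exp() * a @ t.T  # (B, B) similarities
        labels = torch.arange(a.size(0), device=a.device)
        # Symmetric InfoNCE: matched audio/text pairs lie on the diagonal.
        loss = (F.cross_entropy(logits, labels) +
                F.cross_entropy(logits.T, labels)) / 2
        return loss, a, t
```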
📝 Abstract
Target Speech Extraction (TSE) traditionally relies on explicit clues about the speaker's identity, such as enrollment audio, face images, or videos, which may not always be available. In this paper, we propose StyleTSE, a text-guided TSE model that uses natural language descriptions of speaking style, in addition to the audio clue, to extract the desired speech from a given mixture. Our model integrates a speech separation network adapted from SepFormer with a bi-modality clue network that flexibly processes both audio and text clues. To train and evaluate our model, we introduce TextrolMix, a new dataset of speech mixtures paired with natural language descriptions. Experimental results demonstrate that our method effectively separates speech based not only on who is speaking, but also on how they are speaking, enhancing TSE in scenarios where traditional audio clues are absent. Demos are at: https://mingyue66.github.io/TextrolMix/demo/
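To make the clue-conditioning idea concrete, below is a minimal sketch of one plausible way a clue embedding from the bi-modality clue network could steer a SepFormer-style masking network: FiLM (feature-wise linear modulation). The use of FiLM, the module name, and the dimensions are assumptions for illustration; the abstract does not specify how the adapted SepFormer consumes the clue.

```python
# Minimal sketch: injecting a clue embedding into a SepFormer-style
# separator via FiLM. ASSUMPTION: FiLM is one common conditioning choice,
# not necessarily the mechanism StyleTSE actually uses.
import torch
import torch.nn as nn

class FiLMConditioner(nn.Module):
    def __init__(self, clue_dim=256, feat_dim=512):
        super().__init__()
        self.to_scale = nn.Linear(clue_dim, feat_dim)
        self.to_shift = nn.Linear(clue_dim, feat_dim)

    def forward(self, feats, clue):
        # feats: (B, T, feat_dim) mixture features inside the separator
        # clue:  (B, clue_dim)    embedding from the clue network
        scale = self.to_scale(clue).unsqueeze(1)  # broadcast over time
        shift = self.to_shift(clue).unsqueeze(1)
        return feats * (1 + scale) + shift
```

In a design like this, FiLM blocks would be interleaved between the separator's transformer layers so that the mask estimator is biased toward the speaker and speaking style the clue describes.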