Beyond Speaker Identity: Text Guided Target Speech Extraction

📅 2025-01-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address target speech extraction (TSE) in the absence of speaker priors (e.g., enrollment utterances or visual cues), this paper proposes StyleTSE—the first cross-modal TSE framework guided by natural language style descriptions (e.g., “young female, cheerful intonation”). Methodologically, it employs a dual-modality encoder to jointly model audio and textual semantics, builds upon an enhanced SepFormer architecture, and introduces contrastive learning–driven cross-modal alignment. Key contributions include: (1) the first integration of free-form text descriptions as primary guidance for TSE; (2) TextrolMix—the first benchmark dataset featuring fine-grained textual annotations for TSE; and (3) support for flexible few-shot and zero-enrollment scenarios. Experiments on TextrolMix demonstrate a 3.2 dB improvement in signal-to-interference ratio (SIR) over audio-only baselines, strongly validating the efficacy of linguistic cues under identity-ambiguous conditions.

Technology Category

Application Category

📝 Abstract
Target Speech Extraction (TSE) traditionally relies on explicit clues about the speaker's identity like enrollment audio, face images, or videos, which may not always be available. In this paper, we propose a text-guided TSE model StyleTSE that uses natural language descriptions of speaking style in addition to the audio clue to extract the desired speech from a given mixture. Our model integrates a speech separation network adapted from SepFormer with a bi-modality clue network that flexibly processes both audio and text clues. To train and evaluate our model, we introduce a new dataset TextrolMix with speech mixtures and natural language descriptions. Experimental results demonstrate that our method effectively separates speech based not only on who is speaking, but also on how they are speaking, enhancing TSE in scenarios where traditional audio clues are absent. Demos are at: https://mingyue66.github.io/TextrolMix/demo/
Problem

Research questions and friction points this paper is trying to address.

Speaker-independent
Speech Separation
Voice Extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

StyleTSE
Speaker Identification
Textual Style Clues
🔎 Similar Papers
No similar papers found.
M
Mingyue Huo
University of Illinois Urbana-Champaign
A
Abhinav Jain
Amazon Prime Video
C
Cong Phuoc Huynh
Amazon Prime Video
Fanjie Kong
Fanjie Kong
Duke University
Machine LearningComputer VisionFairnessMedical Image Analysis
P
Pichao Wang
Amazon Prime Video
Z
Zhu Liu
Amazon Prime Video
V
Vimal Bhat
Amazon Prime Video