🤖 AI Summary
To address the heavy reliance on paired audio-text annotations in language-queried target sound extraction, this paper proposes a training scheme that requires no parallel data. It leverages the pre-trained CLAP model to bridge language queries and audio, and introduces a retrieval-augmented training framework: an LLM automatically generates audio captions to build a text-embedding cache, and during training the target audio embedding retrieves a caption embedding from this cache to serve as the condition. This keeps the conditioning modality consistent between training and inference and prevents the rich acoustic detail in the target audio embedding from leaking into the model. Experiments show consistent and notable improvements over the existing state of the art across multiple benchmarks, with better generalizability, all without any annotated audio-text pairs.
📝 Abstract
Language-queried target sound extraction (TSE) aims to extract specific sounds from mixtures based on language queries. Traditional fully-supervised training schemes require extensively annotated parallel audio-text data, which are labor-intensive to collect. We introduce a parallel-data-free training scheme that requires only unlabelled audio clips for TSE model training, by utilizing the contrastive language-audio pre-trained model (CLAP). In a vanilla parallel-data-free training stage, the target audio is encoded by the pre-trained CLAP audio encoder to form a condition embedding, while during testing, user language queries are encoded by the CLAP text encoder as the condition embedding. This vanilla approach assumes perfect alignment between text and audio embeddings, which is unrealistic. Two major challenges arise from this training-testing mismatch: the persistent modality gap between text and audio, and the risk of overfitting due to the exposure of rich acoustic details in the target audio embedding during training. To address these challenges, we propose a retrieval-augmented strategy. Specifically, we create an embedding cache from audio captions generated by a large language model (LLM). During training, target audio embeddings retrieve text embeddings from this cache to use as condition embeddings, ensuring consistent modalities between training and testing and eliminating information leakage. Extensive experimental results show that our retrieval-augmented approach achieves consistent and notable performance improvements over the existing state of the art, with better generalizability.
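The retrieval step described above can be sketched in a few lines: the target audio's CLAP embedding is matched by cosine similarity against a cache of CLAP text embeddings of LLM-generated captions, and the nearest caption embedding becomes the training-time condition. The sketch below is a minimal numpy illustration under assumptions of the paper's description, not its actual implementation; the function names are hypothetical, and random vectors stand in for real CLAP embeddings.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Unit-normalize embeddings so dot products equal cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def retrieve_condition(audio_emb, cache_embs, top_k=1):
    """Return the cached caption embedding(s) most similar to the audio embedding.

    audio_emb:  (d,)   CLAP audio embedding of the target source
    cache_embs: (n, d) CLAP text embeddings of LLM-generated captions
    """
    a = l2_normalize(audio_emb)
    c = l2_normalize(cache_embs)
    sims = c @ a                     # cosine similarity to every cached caption
    idx = np.argsort(-sims)[:top_k]  # indices of the nearest captions
    return idx, cache_embs[idx]

# Toy demo with random stand-in embeddings (real ones would come from CLAP).
rng = np.random.default_rng(0)
cache = rng.normal(size=(100, 512))              # hypothetical 100-caption cache
audio = cache[42] + 0.05 * rng.normal(size=512)  # audio embedding near caption 42
idx, cond = retrieve_condition(audio, cache)
print(idx[0])  # retrieves caption 42, the nearest cached entry
```

Because the separation model only ever sees text embeddings as conditions, the train-test modality stays consistent, and the exact acoustic content of the target audio embedding never reaches the model directly.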