🤖 AI Summary
Existing multimodal large language models (MLLMs) rely on text-aligned representations, which limits their capacity for fine-grained audio understanding. Method: This paper proposes SoundCLIP, a sound-native multimodal modeling paradigm that substitutes audio tokens for CLIP's visual tokens directly in architectures such as LLaVA, enabling audio-language fusion without textual bridging. Comparing an MLP projection into CLIP's visual space, trained with InfoNCE on paired audio-video segments, against minimally adjusted raw audio embeddings across five audio encoders, we identify a Pareto trade-off between cross-modal alignment strength and text generation quality. We then introduce WhisperCLIP, an architecture that fuses Whisper's intermediate-layer representations, and construct AVE-2, a large-scale fine-grained audio-visual event dataset of 580,147 three-second clips. Results: Projecting audio into CLIP's space improves audio-to-video retrieval Top-1 accuracy by up to 44 percentage points, while encoders pre-trained with text supervision (CLAP, Whisper, ImageBind) better preserve generation quality. Code and datasets are publicly released.
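For intuition, the sketch below illustrates the token-substitution idea in PyTorch; it is not the released implementation. The token widths, token count, resampling step, and the placeholder `llava_projector` are all assumptions chosen for the example, not details taken from the paper.

```python
# Minimal sketch (assumptions throughout) of token-level substitution:
# audio tokens, projected to the visual-token width, take the place of CLIP
# patch tokens before a LLaVA-style projector feeds them to the language model.
import torch
import torch.nn as nn

clip_token_dim = 1024          # width of CLIP visual patch tokens (assumed)
num_visual_tokens = 576        # LLaVA-style number of patch tokens (assumed)

audio_tokens = torch.randn(1, 50, 768)              # frame-level audio-encoder tokens
to_visual_width = nn.Linear(768, clip_token_dim)    # minimal dimensional adjustment

# Substitute: resample audio tokens to the visual-token count, then reuse the
# unchanged visual pathway of the MLLM.
audio_as_visual = to_visual_width(audio_tokens)                       # (1, 50, 1024)
audio_as_visual = nn.functional.interpolate(
    audio_as_visual.transpose(1, 2), size=num_visual_tokens
).transpose(1, 2)                                                     # (1, 576, 1024)

llava_projector = nn.Linear(clip_token_dim, 4096)     # stand-in for the MLLM projector
prefix_embeddings = llava_projector(audio_as_visual)  # fed to the LLM in place of image tokens
```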
📝 Abstract
While multimodal systems have achieved impressive advances, they typically rely on text-aligned representations rather than directly integrating audio and visual inputs. This reliance can limit the use of acoustic information in tasks requiring nuanced audio understanding. In response, SoundCLIP explores direct audio-visual integration within multimodal large language models (MLLMs) by replacing CLIP's visual tokens with audio representations and selecting sound-relevant patch tokens in models such as LLaVA. We investigate two configurations: (1) projecting audio features into CLIP's visual manifold via a multilayer perceptron trained with InfoNCE on paired audio-video segments, and (2) preserving raw audio embeddings with minimal dimensional adjustments. Experiments with five state-of-the-art audio encoders reveal a fundamental trade-off. While audio-to-video retrieval performance increases dramatically (up to 44 percentage points in Top-1 accuracy) when audio is projected into CLIP's space, text generation quality declines. Encoders pre-trained with text supervision (CLAP, Whisper, ImageBind) maintain stronger generative capabilities than those focused primarily on audiovisual alignment (Wav2CLIP, AudioCLIP), highlighting the value of language exposure for generation tasks. We introduce WhisperCLIP, an architecture that fuses intermediate representations from Whisper, as well as AudioVisual Event Evaluation (AVE-2), a dataset of 580,147 three-second audiovisual clips with fine-grained alignment annotations. Our findings challenge the assumption that stronger cross-modal alignment necessarily benefits all multimodal tasks; instead, a Pareto frontier emerges wherein optimal performance depends on balancing retrieval accuracy with text generation quality. Code and datasets: https://github.com/ali-vosoughi/SoundCLIP.
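As a concrete illustration of configuration (1), here is a minimal PyTorch sketch of an MLP projector trained with a symmetric InfoNCE loss on paired audio-video embeddings. The dimensions, hidden width, temperature, and optimizer settings are assumptions for the example, not values from the paper.

```python
# Minimal sketch (not the authors' code): project audio-encoder embeddings into
# CLIP's visual embedding space with an MLP, trained by symmetric InfoNCE on
# paired audio-video segments. All hyperparameters below are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioToCLIPProjector(nn.Module):
    """Projects pooled audio features into CLIP's visual embedding space."""

    def __init__(self, audio_dim: int = 1024, clip_dim: int = 768, hidden_dim: int = 2048):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(audio_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, clip_dim),
        )

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        return self.mlp(audio_feats)


def info_nce_loss(audio_emb: torch.Tensor, video_emb: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired audio/video embeddings."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    video_emb = F.normalize(video_emb, dim=-1)
    logits = audio_emb @ video_emb.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


# Toy training step on random tensors standing in for encoder outputs.
projector = AudioToCLIPProjector()
optimizer = torch.optim.AdamW(projector.parameters(), lr=1e-4)

audio_feats = torch.randn(32, 1024)   # pooled audio-encoder features (e.g., Whisper/CLAP)
video_feats = torch.randn(32, 768)    # pooled CLIP visual features for the paired clips

optimizer.zero_grad()
loss = info_nce_loss(projector(audio_feats), video_feats)
loss.backward()
optimizer.step()
```

Configuration (2) would skip the contrastive training and keep the audio embeddings as-is, applying only a width-matching linear layer; the retrieval-versus-generation trade-off reported in the abstract is measured between these two regimes.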