Can Sound Replace Vision in LLaVA With Token Substitution?

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) rely on text-aligned representations, limiting their capacity for fine-grained audio understanding. Method: This paper proposes a sound-native multimodal modeling paradigm, replacing visual tokens with audio tokens directly in architectures such as LLaVA to enable audio-language fusion without textual bridging. The study identifies a Pareto trade-off between cross-modal alignment strength and text generation quality, and introduces WhisperCLIP, an architecture that fuses intermediate-layer representations from Whisper, alongside an InfoNCE-trained MLP projection into CLIP's visual space and token-level audio-for-vision substitution. It also constructs AVE-2, a large-scale fine-grained audio-visual event dataset comprising 580K three-second clips. Results: Projecting audio into CLIP's space improves audio-to-video retrieval Top-1 accuracy by up to 44 percentage points over unprojected configurations, at some cost to text generation quality. Code and datasets are publicly released.

📝 Abstract
While multimodal systems have achieved impressive advances, they typically rely on text-aligned representations rather than directly integrating audio and visual inputs. This reliance can limit the use of acoustic information in tasks requiring nuanced audio understanding. In response, SoundCLIP explores direct audio-visual integration within multimodal large language models (MLLMs) by substituting CLIP's visual tokens with audio representations and selecting sound-relevant patch tokens in models such as LLaVA. We investigate two configurations: (1) projecting audio features into CLIP's visual manifold via a multilayer perceptron trained with InfoNCE on paired audio-video segments, and (2) preserving raw audio embeddings with minimal dimensional adjustments. Experiments with five state-of-the-art audio encoders reveal a fundamental trade-off. While audio-to-video retrieval performance increases dramatically (up to 44 percentage points in Top-1 accuracy) when audio is projected into CLIP's space, text generation quality declines. Encoders pre-trained with text supervision (CLAP, Whisper, ImageBind) maintain stronger generative capabilities than those focused primarily on audiovisual alignment (Wav2CLIP, AudioCLIP), highlighting the value of language exposure for generation tasks. We introduce WhisperCLIP, an architecture that fuses intermediate representations from Whisper, as well as AudioVisual Event Evaluation (AVE-2), a dataset of 580,147 three-second audiovisual clips with fine-grained alignment annotations. Our findings challenge the assumption that stronger cross-modal alignment necessarily benefits all multimodal tasks; instead, a Pareto frontier emerges wherein optimal performance depends on balancing retrieval accuracy with text generation quality. Codes and datasets: https://github.com/ali-vosoughi/SoundCLIP.
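Configuration (1) in the abstract, an MLP projector trained with InfoNCE on paired audio-video segments, can be sketched as follows. This is an illustrative NumPy sketch under assumed layer sizes and a commonly used temperature of 0.07; the paper's actual projector architecture and hyperparameters are not specified in this summary.

```python
import numpy as np

def mlp_project(audio_emb, W1, b1, W2, b2):
    """Two-layer MLP projecting audio embeddings toward CLIP's visual manifold.
    Layer sizes are placeholders, not the paper's exact configuration."""
    h = np.maximum(audio_emb @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

def _cross_entropy(logits, labels):
    # Numerically stable log-softmax cross-entropy over rows.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def info_nce(audio_proj, video_emb, temperature=0.07):
    """Symmetric InfoNCE: matched audio/video pairs sit on the diagonal of the
    batch similarity matrix and are treated as positives."""
    a = audio_proj / np.linalg.norm(audio_proj, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature  # (B, B) scaled cosine similarities
    labels = np.arange(len(a))      # positives on the diagonal
    return 0.5 * (_cross_entropy(logits, labels) + _cross_entropy(logits.T, labels))
```

Pulling the matched pairs onto the diagonal while pushing mismatched pairs apart is what drives the retrieval gains the abstract reports; the same pressure reshapes the audio embeddings away from their pre-trained geometry, which is one intuition for the generation-quality decline.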
Problem

Research questions and friction points this paper is trying to address.

Exploring audio-visual integration in multimodal models via token substitution
Investigating the trade-off between audio-video retrieval accuracy and text generation quality
Challenging the assumption that stronger cross-modal alignment benefits all multimodal tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Substitutes CLIP visual tokens with audio representations
Projects audio features into CLIP's visual manifold
Introduces WhisperCLIP and AVE-2 dataset
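The "selecting sound-relevant patch tokens" step mentioned in the abstract can be illustrated with a simple selection rule: rank CLIP patch tokens by cosine similarity to a pooled audio embedding and keep the top-k. This is a hypothetical criterion for illustration only; the paper's actual selection method may differ.

```python
import numpy as np

def select_sound_relevant_patches(patch_tokens, audio_emb, k=8):
    """Keep the k patch tokens most cosine-similar to a pooled audio embedding.
    Illustrative rule only; not necessarily the paper's selection criterion."""
    p = patch_tokens / np.linalg.norm(patch_tokens, axis=1, keepdims=True)
    a = audio_emb / np.linalg.norm(audio_emb)
    scores = p @ a                       # per-patch audio relevance
    top = np.argsort(scores)[::-1][:k]   # indices of the k highest scores
    return patch_tokens[np.sort(top)]    # preserve original spatial order
```

The retained tokens (or projected audio tokens in their place) then occupy the slots that LLaVA-style architectures normally reserve for CLIP visual patches, so the language model's interface is unchanged.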
A. Vosoughi
Computer Science Department, University of Rochester, NY, USA
Jing Bi
Computer Science Department, University of Rochester, NY, USA
Pinxin Liu
University of Rochester
Research interests: Computer Vision, Natural Language Processing, Data Mining
Yunlong Tang
Computer Science Department, University of Rochester, NY, USA
Chenliang Xu
Associate Professor of Computer Science, University of Rochester
Research interests: Computer Vision, Multimodal Learning, Video Understanding, Vision and Language