How Visual Representations Map to Language Feature Space in Multimodal LLMs

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the cross-modal alignment mechanism between visual and linguistic representations in vision-language models (VLMs), challenging the efficacy of prevailing linear adapter architectures for alignment. Adopting a frozen-LLM-and-ViT paradigm with only a linear adapter fine-tuned, we introduce pretrained sparse autoencoders (SAEs) as *invariance probes*—a novel method to quantitatively characterize alignment dynamics. Our analysis reveals that visual representations do not align with the language space at the input layer; instead, alignment emerges progressively across transformer layers: ViT outputs exhibit fundamental misalignment with early LLM layers, while stable cross-modal alignment is achieved only in middle-to-late LLM layers. Crucially, SAE reconstruction error and sparsity evolution jointly form an interpretable, quantitative alignment trajectory. This work provides new empirical evidence for multimodal representation learning and establishes a principled, interpretable analytical framework for probing cross-modal alignment.

📝 Abstract
Effective multimodal reasoning depends on the alignment of visual and linguistic representations, yet the mechanisms by which vision-language models (VLMs) achieve this alignment remain poorly understood. We introduce a methodological framework that deliberately maintains a frozen large language model (LLM) and a frozen vision transformer (ViT), connected solely by training a linear adapter during visual instruction tuning. This design is fundamental to our approach: by keeping the language model frozen, we ensure it retains its original language representations without adaptation to visual data. Consequently, the linear adapter must map visual features directly into the LLM's existing representational space rather than allowing the language model to develop specialized visual understanding through fine-tuning. Our experimental design uniquely enables the use of pre-trained sparse autoencoders (SAEs) of the LLM as analytical probes. These SAEs remain perfectly aligned with the unchanged language model and serve as a snapshot of the learned language feature representations. Through systematic analysis of SAE reconstruction error, sparsity patterns, and SAE feature descriptions, we reveal the layer-wise progression through which visual representations gradually align with language feature representations, converging in middle-to-late layers. This suggests a fundamental misalignment between ViT outputs and early LLM layers, raising important questions about whether current adapter-based architectures optimally facilitate cross-modal representation learning.
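The probing idea in the abstract can be sketched in a few lines: run visual-token hidden states from a given LLM layer through that layer's pretrained SAE, then record reconstruction error and how many SAE features fire. The sketch below is a minimal toy version with random weights standing in for a real pretrained SAE (all dimensions and the `sae_probe` helper are hypothetical, not from the paper); it only illustrates the two metrics being tracked per layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sae_probe(h, W_enc, b_enc, W_dec, b_dec):
    """Encode hidden states h with a (pretrained) sparse autoencoder,
    then report reconstruction error and code sparsity."""
    z = np.maximum(h @ W_enc + b_enc, 0.0)   # ReLU sparse code
    h_hat = z @ W_dec + b_dec                # SAE reconstruction
    recon_err = np.mean((h - h_hat) ** 2)    # mean squared error
    active_frac = np.mean(z > 0)             # fraction of active SAE features
    return recon_err, active_frac

# Toy sizes standing in for real LLM/SAE dimensions (hypothetical).
d_model, d_sae, n_tokens = 64, 256, 32
W_enc = rng.normal(0, 0.05, (d_model, d_sae))
b_enc = np.full(d_sae, -0.1)                 # negative bias encourages sparsity
W_dec = rng.normal(0, 0.05, (d_sae, d_model))
b_dec = np.zeros(d_model)

# Stand-ins for visual-token hidden states at two different LLM layers.
for name in ("early layer", "late layer"):
    h = rng.normal(0, 1.0, (n_tokens, d_model))
    err, frac = sae_probe(h, W_enc, b_enc, W_dec, b_dec)
    print(f"{name}: recon_err={err:.3f}, active_frac={frac:.3f}")
```

In the paper's actual setup, each layer would use its own SAE trained on that layer's text activations; plotting the two metrics across layers yields the alignment trajectory the summary describes.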
Problem

Research questions and friction points this paper is trying to address.

Understand alignment of visual and linguistic representations in VLMs
Map visual features to frozen LLM's language space via adapter
Analyze layer-wise visual-language alignment using sparse autoencoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frozen LLM and ViT with linear adapter
Pre-trained sparse autoencoders as probes
Layer-wise visual-language alignment analysis