Where Does Vision Meet Language? Understanding and Refining Visual Fusion in MLLMs via Contrastive Attention

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the unclear mechanisms of vision–language integration in current multimodal large language models (MLLMs). Through layer-wise masking analysis and attention evolution tracking, the study systematically reveals for the first time that cross-modal fusion predominantly occurs in specific layers and identifies a late-stage “retrospective” reactivation of visual signals. Building on these insights, the authors propose a training-free contrastive attention framework that guides the model to enhance meaningful cross-modal attention transfer. Extensive experiments across diverse mainstream MLLM architectures and multimodal benchmarks demonstrate the effectiveness of the proposed mechanism, yielding significant improvements in multimodal reasoning performance.

📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved remarkable progress in vision-language understanding, yet how they internally integrate visual and textual information remains poorly understood. To bridge this gap, we perform a systematic layer-wise masking analysis across multiple architectures, revealing how visual-text fusion evolves within MLLMs. The results show that fusion emerges at several specific layers rather than being uniformly distributed across the network, and certain models exhibit a late-stage "review" phenomenon in which visual signals are reactivated before output generation. In addition, we analyze layer-wise attention evolution and observe persistent high-attention noise on irrelevant regions, alongside gradually increasing attention on text-aligned areas. Guided by these insights, we introduce a training-free contrastive attention framework that models the transformation between early fusion layers and final layers to highlight meaningful attention shifts. Extensive experiments across various MLLMs and benchmarks validate our analysis and demonstrate that the proposed approach improves multimodal reasoning performance. Code will be released.
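The abstract describes contrasting attention between an early fusion layer and the final layer so that persistent noise (present in both) is suppressed while late-emerging, text-aligned attention is amplified. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of that idea; the function name, the positive-shift-plus-renormalize scheme, and the toy maps are all assumptions, not the authors' method.

```python
import numpy as np

def contrastive_attention(attn_early, attn_final, eps=1e-8):
    """Hypothetical sketch: refine a visual attention map by contrasting
    an early-fusion-layer map against the final-layer map.

    Persistent high attention (noise appearing in both layers) cancels
    out, while regions whose attention grows toward the final layer
    (text-aligned areas, per the paper's analysis) are highlighted.
    """
    # Normalize each map into a distribution over visual tokens.
    p_early = attn_early / (attn_early.sum() + eps)
    p_final = attn_final / (attn_final.sum() + eps)
    # Keep only positive shifts: attention gained by the final layer.
    shift = np.clip(p_final - p_early, 0.0, None)
    # Renormalize so the refined map is again a distribution.
    return shift / (shift.sum() + eps)

# Toy example with 4 visual tokens: token 0 carries persistent noise
# in both layers, while token 3 gains attention only at the final layer.
early = np.array([0.6, 0.2, 0.1, 0.1])
final = np.array([0.6, 0.1, 0.1, 0.2])
refined = contrastive_attention(early, final)
```

In this toy case the refined map concentrates entirely on token 3, the only token whose attention increased, while the noisy token 0 is zeroed out despite dominating both raw maps.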
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Visual-Language Understanding
Visual Fusion
Attention Mechanism
Layer-wise Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive Attention
Visual-Language Fusion
Layer-wise Analysis
Multimodal Large Language Models
Attention Refinement
Shezheng Song
National University of Defense Technology
Shasha Li
National University of Defense Technology
Jie Yu
National University of Defense Technology