Seeing Right but Saying Wrong: Inter- and Intra-Layer Refinement in MLLMs without Training

📅 2026-01-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies and addresses a critical but previously overlooked issue in multimodal large language models (MLLMs): inter-layer attention inconsistency, in which a model attends to the correct visual regions yet still produces erroneous outputs, effectively "seeing right but saying wrong." To resolve this without additional training, the authors propose DualPD, a training-free, dual-perspective decoding refinement strategy. DualPD contrasts output logits between the layers that exhibit the largest attention shift (a layer-wise attention-guided contrastive logits module) and dynamically suppresses low-contribution attention heads (a head-wise information filtering module), thereby strengthening the semantic representations of pivotal layers. Extensive experiments on the LLaVA and Qwen-VL families across multiple multimodal benchmarks show consistent accuracy improvements, confirming the method's effectiveness and strong generalization.

📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated strong capabilities across a variety of vision-language tasks. However, their internal reasoning often exhibits a critical inconsistency: although deeper layers may attend to the correct visual regions, final predictions are frequently misled by noisy attention from earlier layers. This results in a disconnect between what the model internally understands and what it ultimately expresses, a phenomenon we describe as seeing it right but saying it wrong. To address this issue, we propose DualPD, a dual-perspective decoding refinement strategy that enhances visual understanding without any additional training. DualPD consists of two components. (1) The layer-wise attention-guided contrastive logits module captures how the belief in the correct answer evolves by comparing output logits between layers that exhibit the largest attention shift. (2) The head-wise information filtering module suppresses low-contribution attention heads that focus on irrelevant regions, thereby improving attention quality within each layer. Experiments conducted on both the LLaVA and Qwen-VL model families across multiple multimodal benchmarks demonstrate that DualPD consistently improves accuracy without training, confirming its effectiveness and generalizability. The code will be released upon publication.
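Since the code has not yet been released, the two components can only be sketched. The snippet below is a minimal, hypothetical illustration: the function names, the L1 attention-shift measure, the DoLa-style contrast rule `(1 + α)·deep − α·shallow`, and the max-mass head score are all assumptions, not the authors' exact formulation.

```python
import numpy as np

def attention_shift(attn_a, attn_b):
    # Assumed measure: L1 distance between two layers'
    # attention distributions over image patches.
    return np.abs(attn_a - attn_b).sum()

def contrastive_logits(layer_logits, layer_attn, alpha=0.5):
    """Sketch of layer-wise attention-guided contrastive logits.

    layer_logits: (L, V) early-exit logits per layer
    layer_attn:   (L, P) attention mass over image patches per layer

    Finds the adjacent layer pair with the largest attention shift and
    contrasts the deeper layer's logits against the shallower one's.
    """
    shifts = [attention_shift(layer_attn[l], layer_attn[l + 1])
              for l in range(len(layer_attn) - 1)]
    l_star = int(np.argmax(shifts))              # shallow layer of the pair
    shallow = layer_logits[l_star]
    deep = layer_logits[l_star + 1]
    # Amplify what the deeper layer believes beyond the shallower layer
    # (DoLa-style contrast; assumed form, not the paper's exact rule).
    return (1.0 + alpha) * deep - alpha * shallow

def filter_heads(head_attn, keep_ratio=0.75):
    """Sketch of head-wise information filtering.

    head_attn: (H, P) per-head attention over image patches.
    Scores each head by its peak attention mass, keeps the top
    `keep_ratio` fraction, and zeroes out the rest as a crude
    stand-in for suppressing low-contribution heads.
    """
    scores = head_attn.max(axis=1)
    k = max(1, int(keep_ratio * len(scores)))
    keep = np.argsort(scores)[-k:]
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return head_attn * mask[:, None]
```

In practice the per-layer logits and attentions would come from a decoder's intermediate hidden states projected through the output head (e.g., via `output_hidden_states` and `output_attentions` in a Transformers-style model); here plain arrays stand in for them.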
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
attention inconsistency
visual reasoning
layer-wise attention
output disconnection
Innovation

Methods, ideas, or system contributions that make the work stand out.

DualPD
attention refinement
training-free
multimodal reasoning
layer-wise contrastive logits
👥 Authors
Shezheng Song (National University of Defense Technology)
Shasha Li (National University of Defense Technology)
Jie Yu (National University of Defense Technology)