Attention Reallocation: Towards Zero-cost and Controllable Hallucination Mitigation of MLLMs

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from pervasive hallucination, and existing training-free mitigation methods often incur substantial inference overhead. This paper proposes AttnReal—a training-agnostic, zero-overhead attention reallocation mechanism that operates during decoding: it dynamically recovers attention excessively concentrated on output tokens and redistributes it toward visual tokens, thereby strengthening visual input constraints on generation. AttnReal supports continuous adjustment of intervention intensity, enabling controllable trade-offs between faithfulness and generation quality, and is fully compatible with standard decoding strategies—including greedy search, beam search, and nucleus sampling. We systematically evaluate AttnReal across six open-source MLLMs and three decoding paradigms. Results demonstrate significant improvements in response faithfulness without degrading generation quality or introducing any latency overhead.

📝 Abstract
Multi-Modal Large Language Models (MLLMs) stand out in various tasks but still struggle with hallucinations. While recent training-free mitigation methods mostly introduce additional inference overhead via retrospection strategies and contrastive decoding, we propose attention reallocation (AttnReal) to mitigate hallucinations with nearly zero extra cost. Our approach is motivated by the key observation that an MLLM's unreasonable attention distribution causes features to be dominated by historical output tokens, which in turn contributes to hallucinated responses because of the distribution gap between different token types. Based on this observation, AttnReal recycles excessive attention from output tokens and reallocates it to visual tokens, which reduces the MLLM's reliance on language priors and ensures that the decoding process depends more on the visual inputs. More interestingly, we find that by controlling the intensity of AttnReal, we can achieve a wide-ranging trade-off between response faithfulness and overall performance. Comprehensive results on different benchmarks validate the effectiveness of AttnReal across six open-source MLLMs and three decoding strategies.
Problem

Research questions and friction points this paper is trying to address.

Mitigate hallucinations in Multi-Modal Large Language Models (MLLMs).
Avoid the extra inference overhead that existing training-free mitigation methods (retrospection strategies, contrastive decoding) introduce.
Provide a controllable trade-off between response faithfulness and overall generation quality.
Innovation

Methods, ideas, or system contributions that make the work stand out.

AttnReal mitigates hallucinations with nearly zero extra inference cost.
Recycles excessive attention from historical output tokens and reallocates it to visual tokens.
Intervention intensity is continuously adjustable, enabling a wide trade-off between faithfulness and overall performance.
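The core mechanism can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's exact formulation: the function name, the proportional redistribution rule, and the `intensity` parameterization are assumptions made for clarity.

```python
import numpy as np

def reallocate_attention(attn, visual_idx, output_idx, intensity=0.5):
    """Hypothetical sketch of AttnReal-style attention reallocation.

    attn: 1-D array of attention weights for one query position (sums to 1).
    visual_idx / output_idx: index arrays for visual and output-token positions.
    intensity: fraction of each output token's attention to recycle (0..1).
    """
    attn = attn.copy()
    # Recover a fraction of the attention mass placed on output tokens.
    recycled = attn[output_idx] * intensity
    attn[output_idx] -= recycled
    budget = recycled.sum()
    # Redistribute the recovered mass to visual tokens, proportionally to
    # their current weights so relative visual saliency is preserved.
    vis = attn[visual_idx]
    attn[visual_idx] = vis + budget * (vis / vis.sum())
    return attn  # total attention mass is unchanged
```

Because the recovered mass is simply moved rather than rescaled, the weights still form a valid distribution, and `intensity` gives the continuous control knob over the faithfulness/quality trade-off described above.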
Chongjun Tu
Fudan University
neural architecture search · dataset pruning · MLLM inference acceleration
Peng Ye
The Chinese University of Hong Kong, Shanghai Artificial Intelligence Laboratory
Dongzhan Zhou
Researcher at Shanghai AI Lab
AI4Science · computer vision · deep learning
Lei Bai
Shanghai AI Laboratory
Foundation Model · Science Intelligence · Multi-Agent System · Autonomous Discovery
Gang Yu
StepFun
Tao Chen
Fudan University
Wanli Ouyang
The Chinese University of Hong Kong, Shanghai Artificial Intelligence Laboratory