🤖 AI Summary
Large Vision-Language Models (LVLMs) commonly suffer from vision-text hallucinations: generating syntactically correct but image-inconsistent outputs. To address this, we propose Map-Level Attention Processing (MAP), a training-free decoding method that treats the model's hidden states as a 2D semantic map. Its core components are Layer-Wise Criss-Cross Attention, which models long-range semantic dependencies by aggregating tokens along both the inter-layer and intra-layer dimensions of this map, and Global-Local Logit Fusion, which combines the logits obtained before and after global attention to refine predictions during decoding. Crucially, the approach requires no parameter updates; it operates solely at inference time by restructuring attention over hidden states. Evaluated on POPE, MME, and MMHal-Bench, it consistently improves factual consistency and overall performance, demonstrating map-level decoding as a promising paradigm for enhancing vision-language alignment.
📝 Abstract
Large Vision-Language Models (LVLMs) have achieved impressive performance in multimodal tasks, but they still suffer from hallucinations, i.e., generating content that is grammatically accurate but inconsistent with visual inputs. In this work, we introduce a novel map-level perspective to mitigate hallucinations in LVLMs, interpreting the hidden states of the model as a 2D semantic map. We observe that factual information is widely distributed across this map, extending beyond the localized inter- or intra-layer regions targeted by most existing methods (e.g., contrastive decoding and layer-wise consistency). Building on this insight, we propose Map-Level Attention Processing (MAP), a training-free decoding method that effectively leverages factual information through attention-based map-level operations to improve factual consistency. Specifically, we employ Layer-Wise Criss-Cross Attention to progressively refine token representations at each decoding layer by aggregating tokens along both the inter- and intra-layer dimensions. Additionally, a Global-Local Logit Fusion mechanism combines the logits obtained before and after global attention to further refine predictions and improve accuracy. Our method consistently improves the truthfulness and performance of LVLMs across benchmarks such as POPE, MME, and MMHal-Bench, demonstrating the potential of the map-level decoding strategy.
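To make the two operations concrete, here is a minimal NumPy sketch of the map-level view as described in the abstract: hidden states form a layers × tokens map, each map position attends to its own row (intra-layer) and column (inter-layer) in criss-cross fashion, and final logits mix the pre- and post-attention predictions. All names (`criss_cross_attention`, `global_local_fusion`, the mixing weight `alpha`) are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def criss_cross_attention(H):
    """One map-level refinement pass over the hidden-state map H
    of shape (num_layers, num_tokens, hidden_dim).

    Each position (l, t) uses its own state as the query and attends
    to its row H[l] (intra-layer tokens) and column H[:, t]
    (inter-layer states of the same token) -- the "criss-cross" set.
    """
    L, T, d = H.shape
    out = np.empty_like(H)
    scale = 1.0 / np.sqrt(d)
    for l in range(L):
        for t in range(T):
            q = H[l, t]
            keys = np.concatenate([H[l], H[:, t]], axis=0)  # row + column
            w = softmax(keys @ q * scale)                   # attention weights
            out[l, t] = w @ keys                            # weighted aggregate
    return out

def global_local_fusion(h_local, h_global, W_vocab, alpha=0.5):
    """Fuse logits from the original (local) and attention-refined
    (global) hidden states of the token being decoded.
    alpha is a hypothetical mixing weight, not from the paper."""
    return alpha * (W_vocab @ h_global) + (1 - alpha) * (W_vocab @ h_local)

# Usage: refine a toy 4-layer, 6-token map, then fuse logits for the
# last token of the final layer against a toy vocabulary projection.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 6, 8))
R = criss_cross_attention(H)
W_vocab = rng.standard_normal((10, 8))  # 10-word toy vocabulary
logits = global_local_fusion(H[-1, -1], R[-1, -1], W_vocab)
```

The row-plus-column attention set is what keeps the operation cheap: each position aggregates O(L + T) states instead of the full O(L·T) map, while repeated passes still propagate information across the whole map.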