🤖 AI Summary
This work addresses the critical issue of object hallucination in large vision-language models, which severely undermines their reliability, and proposes a novel intervention framework that operates in a single forward pass without requiring a reference model. By leveraging orthogonal subspace editing, the method decomposes hidden states into three orthogonal components—visual evidence, conflicting priors, and residual uncertainty—and selectively suppresses hallucination-inducing patterns. The approach provides a mathematical guarantee that modifications to the prior subspace do not interfere with visual evidence, thereby enabling efficient and evidence-consistent hallucination mitigation. Experimental results demonstrate state-of-the-art performance on the POPE and CHAIR benchmarks while preserving general capabilities on MME, significantly outperforming baselines such as contrastive decoding and static subspace editing.
📝 Abstract
Object hallucination in Large Vision-Language Models (LVLMs) significantly hinders their reliable deployment. Existing methods struggle to balance efficiency and accuracy: they often require expensive reference models and multiple forward passes, or apply static edits that risk suppressing genuine visual evidence. To address this, we introduce HulluEdit, a single-pass, reference-free intervention framework. Our core innovation is orthogonal subspace editing: we decompose the model's hidden states into three orthogonal subspaces (visual evidence, conflicting priors, and residual uncertainty), enabling selective suppression of hallucinatory patterns without interfering with visual grounding. This decomposition mathematically guarantees that edits applied to the prior subspace leave the visual component entirely unaffected. Extensive experiments show that HulluEdit achieves state-of-the-art hallucination reduction on benchmarks including POPE and CHAIR across diverse architectures, while preserving general capabilities on MME and maintaining efficient inference. Our method consistently outperforms contrastive decoding and static subspace editing baselines, offering a new pathway toward more trustworthy LVLMs.
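The non-interference guarantee follows from basic linear algebra: if the prior subspace is constructed orthogonal to the visual-evidence subspace, then scaling or removing the prior component cannot change the visual component. The minimal sketch below illustrates this with random vectors; the subspace bases, the suppression strength `alpha`, and the helper `build_projector` are illustrative assumptions, not the paper's actual estimation procedure (which would derive the subspaces from model activations).

```python
import numpy as np

def build_projector(basis):
    """Return the orthogonal projection matrix onto the column span of `basis`."""
    q, _ = np.linalg.qr(basis)  # orthonormalize columns
    return q @ q.T

rng = np.random.default_rng(0)
d = 8  # toy hidden-state dimension

# Hypothetical "visual evidence" directions (in practice, estimated from activations).
V = rng.standard_normal((d, 2))
P_v = build_projector(V)

# Hypothetical "conflicting prior" directions, orthogonalized against the
# visual subspace so the two subspaces are exactly orthogonal.
P_raw = rng.standard_normal((d, 2))
P_raw = P_raw - P_v @ P_raw
P_p = build_projector(P_raw)

h = rng.standard_normal(d)      # a hidden state
alpha = 0.8                     # suppression strength for the prior component
h_edit = h - alpha * (P_p @ h)  # suppress only the prior component

# Guarantee: the visual component of the edited state is unchanged.
assert np.allclose(P_v @ h_edit, P_v @ h)
```

Because `P_v @ P_p = 0` by construction, the edit term `alpha * (P_p @ h)` vanishes under the visual projector, which is the evidence-preservation property the abstract claims.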