CRoPS: A Training-Free Hallucination Mitigation Framework for Vision-Language Models

📅 2026-01-02
🏛️ Trans. Mach. Learn. Res.
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the susceptibility of vision-language models to hallucination during generation, which undermines their practical reliability. To mitigate this issue without requiring additional training, the authors propose a novel framework that constructs diverse hallucinatory variants by selectively removing critical textual tokens and then fuses multi-source hallucination signals through generalized contrastive decoding. This approach overcomes the limitations of existing methods that rely on overly narrow assumptions about hallucination origins. Extensive experiments demonstrate consistent performance gains across six benchmark datasets and three prominent vision-language models, achieving a 20% improvement in CHAIR score and outperforming current state-of-the-art training-free hallucination suppression techniques.

📝 Abstract
Despite the rapid success of Large Vision-Language Models (LVLMs), a persistent challenge is their tendency to generate hallucinated content, undermining reliability in real-world use. Existing training-free methods address hallucinations but face two limitations: (i) they rely on narrow assumptions about hallucination sources, and (ii) their effectiveness declines toward the end of generation, where hallucinations are most likely to occur. A common strategy is to build hallucinated models by completely or partially removing visual tokens and contrasting them with the original model. Yet, this alone proves insufficient, since visual information still propagates into generated text. Building on this insight, we propose a novel hallucinated model that captures hallucination effects by selectively removing key text tokens. We further introduce Generalized Contrastive Decoding, which integrates multiple hallucinated models to represent diverse hallucination sources. Together, these ideas form CRoPS, a training-free hallucination mitigation framework that improves CHAIR scores by 20% and achieves consistent gains across six benchmarks and three LVLM families, outperforming state-of-the-art training-free methods.
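The abstract's core mechanism can be sketched in a few lines: build one or more "hallucinated" logit distributions (e.g. from a model with visual or key text tokens removed), mix them, and contrast the mix against the base model so that tokens inflated by hallucination are suppressed. This is a minimal toy sketch of generalized contrastive decoding, not the paper's exact formulation; the function names, the equal mixing weights, and the specific contrast formula `(1 + α)·base − α·mix` are illustrative assumptions.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def generalized_contrastive_decode(base_logits, halluc_logits_list, weights, alpha=1.0):
    # Weighted mix of several hallucination signals (hypothetical form),
    # then contrast the base model against that mix.
    vocab = range(len(base_logits))
    mixed = [sum(w * h[i] for w, h in zip(weights, halluc_logits_list)) for i in vocab]
    contrasted = [(1 + alpha) * b - alpha * m for b, m in zip(base_logits, mixed)]
    return softmax(contrasted)

# Toy 4-token vocabulary; both hallucinated variants inflate token 3.
base     = [2.0, 1.0, 0.5, 1.5]
h_visual = [2.0, 1.0, 0.5, 3.0]   # hypothetical variant: visual tokens removed
h_text   = [1.8, 1.2, 0.5, 2.8]   # hypothetical variant: key text tokens removed
probs = generalized_contrastive_decode(base, [h_visual, h_text], weights=[0.5, 0.5])
```

In this toy run, token 3's probability drops well below what the base model alone would assign (about 0.28), because both hallucinated variants agree in boosting it, so the contrast subtracts it out.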
Problem

Research questions and friction points this paper is trying to address.

hallucination
vision-language models
reliability
training-free methods
contrastive decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

hallucination mitigation
training-free
contrastive decoding
vision-language models
token removal