Self-Correcting Decoding with Generative Feedback for Mitigating Hallucinations in Large Vision-Language Models

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently generate hallucinated text that is inconsistent with the input image, hindering their practical deployment. To address this, we propose DeGF, a training-free self-correcting decoding algorithm that integrates text-to-image diffusion models (e.g., SDXL) into LVLM decoding as an inverse source of visual feedback, forming a closed-loop generate, feedback, refine framework. DeGF verifies visual consistency at both the response and token levels, combining contrastive decoding with reweighted sampling for plug-and-play hallucination suppression, and requires no fine-tuning or architectural modification. Evaluated on six established hallucination benchmarks, it outperforms state-of-the-art methods across all major hallucination categories (object, attribute, relational, and counting) without compromising generation quality.

📝 Abstract
While recent Large Vision-Language Models (LVLMs) have shown remarkable performance in multi-modal tasks, they are prone to generating hallucinatory text responses that do not align with the given visual input, which restricts their practical applicability in real-world scenarios. In this work, inspired by the observation that the text-to-image generation process is the inverse of image-conditioned response generation in LVLMs, we explore the potential of leveraging text-to-image generative models to assist in mitigating hallucinations in LVLMs. We discover that generative models can offer valuable self-feedback for mitigating hallucinations at both the response and token levels. Building on this insight, we introduce self-correcting Decoding with Generative Feedback (DeGF), a novel training-free algorithm that incorporates feedback from text-to-image generative models into the decoding process to effectively mitigate hallucinations in LVLMs. Specifically, DeGF generates an image from the initial response produced by LVLMs, which acts as an auxiliary visual reference and provides self-feedback to verify and correct the initial response through complementary or contrastive decoding. Extensive experimental results validate the effectiveness of our approach in mitigating diverse types of hallucinations, consistently surpassing state-of-the-art methods across six benchmarks. Code is available at https://github.com/zhangce01/DeGF.
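The token-level correction described above can be illustrated with a small contrastive-decoding sketch. Note the specific formula, the `alpha` weight, and the toy logits below are assumptions modeled on standard contrastive decoding, not DeGF's exact implementation: logits conditioned on the real input image are contrasted against logits conditioned on the image generated from the initial response, so a hallucinated token that the feedback image strongly depicts gets suppressed.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def contrastive_logits(logits_orig, logits_gen, alpha=1.0):
    """Contrastive combination (a common form, assumed here):
    boost tokens supported by the true input image, penalize tokens
    favored mainly under the generated feedback image."""
    return [(1 + alpha) * lo - alpha * lg
            for lo, lg in zip(logits_orig, logits_gen)]

# Toy 3-token vocabulary (hypothetical numbers). Token 2 is a hallucinated
# object: the LVLM slightly prefers it given the real image, and the image
# generated from the hallucinated response depicts it strongly.
logits_orig = [2.0, 1.0, 2.2]   # conditioned on the input image
logits_gen  = [1.0, 1.0, 4.0]   # conditioned on the generated feedback image

p_corr = softmax(contrastive_logits(logits_orig, logits_gen, alpha=1.0))
# greedy decoding now flips from the hallucinated token 2 to token 0
```

In this toy case the corrected logits are [3.0, 1.0, 0.4], so the greedy choice moves away from the hallucinated token; the complementary (agreement) branch of DeGF would instead reinforce tokens both images support.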
Problem

Research questions and friction points this paper is trying to address.

How can hallucinations in LVLMs be mitigated without fine-tuning or architectural changes?
Can text-to-image generation, as the inverse of image-conditioned response generation, provide useful self-feedback?
How should such generative feedback be incorporated into the decoding process?
Innovation

Methods, ideas, or system contributions that make the work stand out.

DeGF: a training-free self-correcting decoding algorithm driven by generative feedback
Renders the LVLM's initial response with a text-to-image model to obtain an auxiliary visual reference
Verifies and corrects the response via complementary or contrastive decoding at both the response and token levels