Look Closer! An Adversarial Parametric Editing Framework for Hallucination Mitigation in VLMs

📅 2025-12-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) frequently suffer from visual hallucination, generating outputs inconsistent with image content, due to over-reliance on linguistic priors. To address this, we propose ALEAHallu, an adversarial parametric editing framework following an Activate-Locate-Edit-Adversarially (ALEA) paradigm, and the first method to localize hallucination-sensitive parameter clusters via differential hidden-state analysis. ALEAHallu introduces a differentiable adversarial prefix optimization mechanism that explicitly attenuates reliance on language priors while amplifying the weight of visual evidence. The approach supports end-to-end training and requires only lightweight fine-tuning of the localized parameter clusters. Across both generative and discriminative VLM benchmarks, ALEAHallu consistently reduces hallucination rates and improves visual grounding. The code is publicly released.
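
As a rough illustration of the localization step, the sketch below scores each decoder layer by how far its hidden states diverge between a grounded response and a hallucinatory one, then keeps the top-scoring layers as the editable parameter clusters. It assumes a HuggingFace-style causal LM whose forward pass returns per-layer hidden states; `layer_divergence`, `select_clusters`, the last-token L2 gap, and the `top_k` cutoff are all hypothetical choices, not the paper's exact statistic.

```python
import torch

# Sketch of the "Locate" step under stated assumptions: a HuggingFace-style
# model whose forward pass returns per-layer hidden states. The last-token
# L2 gap is an illustrative divergence measure, not the paper's statistic.

@torch.no_grad()
def layer_divergence(model, grounded_ids, hallucinated_ids):
    """Score each layer by how far apart the hidden states of a grounded
    response and a hallucinatory response drift."""
    pos = model(grounded_ids, output_hidden_states=True).hidden_states
    neg = model(hallucinated_ids, output_hidden_states=True).hidden_states
    # One score per layer: mean L2 distance of the last-token hidden states.
    return [(p[:, -1] - n[:, -1]).norm(dim=-1).mean().item()
            for p, n in zip(pos, neg)]

def select_clusters(scores, top_k=3):
    """Indices of the layers with the largest divergence; their weights are
    the hallucination-prone parameter clusters selected for editing."""
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:top_k]
```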

📝 Abstract
While Vision-Language Models (VLMs) have garnered increasing attention in the AI community due to their promising practical applications, they exhibit persistent hallucination issues, generating outputs misaligned with visual inputs. Recent studies attribute these hallucinations to VLMs' over-reliance on linguistic priors and insufficient visual feature integration, proposing heuristic decoding calibration strategies to mitigate them. However, the non-trainable nature of these strategies inherently limits their optimization potential. To this end, we propose ALEAHallu, an adversarial parametric editing framework for hallucination mitigation in VLMs, which follows an Activate-Locate-Edit-Adversarially (ALEA) paradigm. Specifically, we first construct an activation dataset comprising grounded responses (positive samples attentively anchored in visual features) and hallucinatory responses (negative samples reflecting LLM prior bias and internal knowledge artifacts). Next, we identify critical hallucination-prone parameter clusters by analyzing the differential hidden states of response pairs. These clusters are then fine-tuned on prompts injected with adversarially tuned prefixes that are optimized to maximize visual neglect, thereby forcing the model to prioritize visual evidence over inherent parametric biases. Evaluations on both generative and discriminative VLM tasks demonstrate the significant effectiveness of ALEAHallu in alleviating hallucinations. Our code is available at https://github.com/hujiayu1223/ALEAHallu.
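
To make the activation-dataset construction concrete, here is one plausible way to collect a grounded/hallucinatory response pair, assuming a LLaVA-style HuggingFace VLM and processor. The `pixel_values` ablation and all names here are assumptions for illustration; the paper may source its negative samples differently.

```python
import torch

# One plausible construction of a response pair: decode once with the real
# image (grounded positive), and once with the visual input ablated so the
# decoder falls back on its language prior (hallucinatory negative).
# Assumes a LLaVA-style HuggingFace VLM whose processor emits `pixel_values`.

@torch.no_grad()
def build_response_pair(vlm, processor, image, question, max_new_tokens=64):
    inputs = processor(images=image, text=question, return_tensors="pt")
    grounded = vlm.generate(**inputs, max_new_tokens=max_new_tokens)

    # Zero out the pixel values to starve the model of visual evidence.
    ablated = {**inputs, "pixel_values": torch.zeros_like(inputs["pixel_values"])}
    hallucinated = vlm.generate(**ablated, max_new_tokens=max_new_tokens)
    return grounded, hallucinated
```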
Problem

Research questions and friction points this paper is trying to address.

Mitigates hallucination issues in Vision-Language Models
Addresses over-reliance on linguistic priors in VLMs
Improves visual feature integration to reduce misalignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial parametric editing framework for hallucination mitigation
Activate-Locate-Edit-Adversarially (ALEA) paradigm to identify and tune parameters
Fine-tuning of hallucination-prone clusters with adversarially tuned prefixes (see the sketch after this list)
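
The sketch below illustrates the adversarial prefix idea: a trainable soft prefix is optimized to minimize the negative log-likelihood of a hallucinatory continuation, i.e. to maximize visual neglect; the located clusters would then be fine-tuned on prompts carrying this prefix (not shown). The objective, tensor shapes, and hyperparameters are illustrative stand-ins, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of adversarial prefix tuning. The model is assumed frozen;
# only the soft prefix receives optimizer updates. `prompt_embeds` holds the
# (visual + text) prompt embeddings, `halluc_embeds`/`halluc_ids` the
# embedded and tokenized hallucinatory continuation.

def optimize_adversarial_prefix(model, prompt_embeds, halluc_embeds,
                                halluc_ids, prefix_len=8, steps=50, lr=1e-2):
    d = prompt_embeds.size(-1)
    prefix = torch.randn(1, prefix_len, d, device=prompt_embeds.device,
                         dtype=prompt_embeds.dtype, requires_grad=True)
    opt = torch.optim.Adam([prefix], lr=lr)
    for _ in range(steps):
        inputs = torch.cat([prefix, prompt_embeds, halluc_embeds], dim=1)
        logits = model(inputs_embeds=inputs).logits
        n = halluc_ids.size(1)
        # Each hallucinatory token is predicted from the position before it;
        # minimizing this NLL maximizes the hallucination's likelihood, i.e.
        # the prefix is pushed toward maximal visual neglect.
        pred = logits[:, -n - 1:-1, :]
        loss = F.cross_entropy(pred.reshape(-1, pred.size(-1)),
                               halluc_ids.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return prefix.detach()
```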
👥 Authors
Jiayu Hu
College of Computer Science, Chongqing University
Beibei Li
College of Computer Science, Chongqing University
Jiangwei Xia
College of Computer Science, Chongqing University
Yanjun Qin
Tsinghua University
Bing Ji
College of Computer Science, Chongqing University
Zhongshi He
College of Computer Science, Chongqing University