🤖 AI Summary
Vision-language models (VLMs) frequently suffer from visual hallucination, generating outputs inconsistent with image content, largely due to over-reliance on linguistic priors. To address this, the authors propose ALEAHallu, an Activate-Locate-Edit-Adversarially framework that localizes hallucination-prone parameter clusters via differential hidden-state analysis of paired grounded and hallucinatory responses. ALEAHallu then fine-tunes only these localized clusters using adversarially optimized prefixes designed to maximize visual neglect, explicitly attenuating reliance on language priors while amplifying the weight of visual evidence. Unlike heuristic decoding-calibration methods, the approach is trainable end to end and requires only lightweight fine-tuning of the localized parameters. Across multiple VLM benchmarks, ALEAHallu consistently reduces hallucination rates and improves visual grounding in both generative and discriminative tasks. The code is publicly released.
📝 Abstract
While Vision-Language Models (VLMs) have garnered increasing attention in the AI community due to their promising practical applications, they exhibit persistent hallucination issues, generating outputs misaligned with visual inputs. Recent studies attribute these hallucinations to VLMs' over-reliance on linguistic priors and insufficient visual feature integration, and propose heuristic decoding calibration strategies to mitigate them. However, the non-trainable nature of these strategies inherently limits their optimization potential. To this end, we propose ALEAHallu, an adversarial parametric editing framework for hallucination mitigation in VLMs that follows an **A**ctivate-**L**ocate-**E**dit-**A**dversarially paradigm. Specifically, we first construct an activation dataset comprising grounded responses (positive samples attentively anchored in visual features) and hallucinatory responses (negative samples reflecting LLM prior bias and internal knowledge artifacts). Next, we identify critical hallucination-prone parameter clusters by analyzing the differential hidden states of these response pairs. These clusters are then fine-tuned using prompts injected with adversarially tuned prefixes that are optimized to maximize visual neglect, thereby forcing the model to prioritize visual evidence over its inherent parametric biases. Evaluations on both generative and discriminative VLM tasks demonstrate the significant effectiveness of ALEAHallu in alleviating hallucinations. Our code is available at https://github.com/hujiayu1223/ALEAHallu.
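To make the "Locate" step concrete, the sketch below shows one plausible way to rank hidden units by how differently they activate on grounded versus hallucinatory responses. This is a toy illustration under stated assumptions, not the paper's implementation: the array shapes, the `locate_clusters` helper, and the mean-absolute-difference scoring rule are all hypothetical simplifications of the differential hidden-state analysis described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 16   # assumed hidden-state width (toy scale)
n_pairs = 32      # number of (grounded, hallucinatory) response pairs

# Synthetic hidden states for paired responses: shape (n_pairs, hidden_dim).
grounded = rng.normal(size=(n_pairs, hidden_dim))
hallucinated = grounded.copy()
# Pretend units 3 and 7 are the "hallucination-prone" ones by shifting them.
hallucinated[:, [3, 7]] += 2.0

def locate_clusters(pos, neg, top_k=2):
    """Score each hidden unit by its mean absolute activation difference
    across response pairs, and return the top-k most divergent units."""
    diff = np.abs(pos - neg).mean(axis=0)
    return np.argsort(diff)[::-1][:top_k]

clusters = locate_clusters(grounded, hallucinated)
print(sorted(clusters.tolist()))  # the shifted units should rank highest
```

In the actual framework these scores would be computed from real VLM hidden states and used to select the parameter clusters that receive the lightweight adversarial fine-tuning; only the selection logic is sketched here.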