FaithSCAN: Model-Driven Single-Pass Hallucination Detection for Faithful Visual Question Answering

📅 2026-01-01
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the critical issue of hallucinated answers in visual question answering (VQA), where vision-language models often generate responses lacking grounding in visual evidence, compromising reliability in safety-critical applications. To mitigate this, we propose FaithSCAN, a lightweight, single-pass inference framework that systematically integrates multidimensional internal signals, including token-level decoding uncertainty, intermediate visual representations, and cross-modal alignment features, for hallucination detection. Our key innovations include a branch-wise evidence encoding scheme, an uncertainty-aware attention mechanism, and an extension of the LLM-as-a-Judge paradigm to VQA for annotation-free automatic supervision. Experiments demonstrate that FaithSCAN significantly outperforms existing methods across multiple VQA benchmarks, achieving state-of-the-art performance in both detection accuracy and computational efficiency, while also revealing architectural differences in hallucination generation mechanisms across models.

๐Ÿ“ Abstract
Faithfulness hallucinations in VQA occur when vision-language models produce fluent yet visually ungrounded answers, severely undermining their reliability in safety-critical applications. Existing detection methods mainly fall into two categories: external verification approaches relying on auxiliary models or knowledge bases, and uncertainty-driven approaches using repeated sampling or uncertainty estimates. The former suffer from high computational overhead and are limited by external resource quality, while the latter capture only limited facets of model uncertainty and fail to sufficiently explore the rich internal signals associated with the diverse failure modes. Both paradigms thus have inherent limitations in efficiency, robustness, and detection performance. To address these challenges, we propose FaithSCAN: a lightweight network that detects hallucinations by exploiting rich internal signals of VLMs, including token-level decoding uncertainty, intermediate visual representations, and cross-modal alignment features. These signals are fused via branch-wise evidence encoding and uncertainty-aware attention. We also extend the LLM-as-a-Judge paradigm to VQA hallucination and propose a low-cost strategy to automatically generate model-dependent supervision signals, enabling supervised training without costly human labels while maintaining high detection accuracy. Experiments on multiple VQA benchmarks show that FaithSCAN significantly outperforms existing methods in both effectiveness and efficiency. In-depth analysis shows hallucinations arise from systematic internal state variations in visual perception, cross-modal reasoning, and language decoding. Different internal signals provide complementary diagnostic cues, and hallucination patterns vary across VLM architectures, offering new insights into the underlying causes of multimodal hallucinations.
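The abstract describes fusing three internal-signal branches (token-level decoding uncertainty, intermediate visual representations, cross-modal alignment features) via branch-wise evidence encoding and uncertainty-aware attention. The paper's actual architecture and dimensions are not given on this page; the following is only a minimal NumPy sketch of that fusion idea, with made-up dimensions, random stand-in weights, and illustrative uncertainty values in place of learned parameters and real VLM activations:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_branch(x, W, b):
    # Branch-wise evidence encoder: project one raw signal into a shared space.
    return np.tanh(x @ W + b)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical dimensions: raw signal dim 16, shared evidence dim 8.
d_in, d_h = 16, 8

# Stand-ins for the three internal signals named in the abstract.
signals = {
    "token_uncertainty": rng.normal(size=d_in),  # token-level decoding uncertainty
    "visual_repr":       rng.normal(size=d_in),  # intermediate visual representation
    "cross_modal":       rng.normal(size=d_in),  # cross-modal alignment features
}

# One encoder per branch (random weights standing in for learned ones).
encoders = {k: (rng.normal(size=(d_in, d_h)) / np.sqrt(d_in), np.zeros(d_h))
            for k in signals}
encoded = np.stack([encode_branch(v, *encoders[k]) for k, v in signals.items()])

# Uncertainty-aware attention: a per-branch scalar uncertainty biases the
# attention logits, so less reliable branches can be up- or down-weighted.
branch_uncertainty = np.array([0.9, 0.3, 0.5])   # illustrative values
query = rng.normal(size=d_h) / np.sqrt(d_h)
attn_logits = encoded @ query + branch_uncertainty
weights = softmax(attn_logits)                   # sums to 1 over the 3 branches
fused = weights @ encoded                        # fused evidence vector, shape (d_h,)

# Linear head -> probability that the answer is hallucinated (single pass).
w_out = rng.normal(size=d_h) / np.sqrt(d_h)
p_halluc = 1.0 / (1.0 + np.exp(-(fused @ w_out)))
print(f"hallucination probability: {p_halluc:.3f}")
```

The point of the single-pass design is that all inputs here are byproducts of one ordinary decoding run of the VLM, so detection adds only this small network's cost, unlike sampling-based uncertainty methods that require repeated generations.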
Problem

Research questions and friction points this paper is trying to address.

visual question answering
hallucination detection
vision-language models
faithfulness
multimodal hallucinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

hallucination detection
visual question answering
internal signal fusion
uncertainty-aware attention
LLM-as-a-Judge
Chaodong Tong
Institute of Information Engineering, Chinese Academy of Sciences (CAS) and School of Cyber Security, University of CAS, Beijing 100093, China
Qi Zhang
China Industrial Control Systems Cyber Emergency Response Team, Beijing 100040, China
Chen Li
China Electronics Standardization Institute, Ministry of Industry and Information Technology of the People's Republic of China, Beijing 100007, China
Lei Jiang
Technical Institute of Physics and Chemistry, Chinese Academy of Sciences
Yanbing Liu
Institute of Information Engineering, Chinese Academy of Sciences (CAS) and School of Cyber Security, University of CAS, Beijing 100093, China