Improving VQA Reliability: A Dual-Assessment Approach with Self-Reflection and Cross-Model Verification

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) frequently generate high-confidence hallucinations in visual question answering (VQA), severely undermining answer reliability. To address this, we propose DAVR, the first dual-path evaluation framework that jointly performs introspective assessment of response credibility and unsupervised cross-model fact verification. DAVR introduces a dual-selector attention module, built on feature disentanglement and question-answer embedding alignment, to quantify the VLM's internal uncertainty, and it additionally leverages an external reference model for cross-model factual validation. Crucially, DAVR requires no additional annotations or fine-tuning. On the ICCV-CLVL 2025 Reliable VQA Challenge, DAVR achieves first place, attaining Φ₁₀₀ = 39.64 (+8.2) and 100-AUC = 97.22, demonstrating substantial improvements in answer trustworthiness and confidence calibration.

📝 Abstract
Vision-language models (VLMs) have demonstrated significant potential in Visual Question Answering (VQA). However, the susceptibility of VLMs to hallucinations can lead to overconfident yet incorrect answers, severely undermining answer reliability. To address this, we propose Dual-Assessment for VLM Reliability (DAVR), a novel framework that integrates Self-Reflection and Cross-Model Verification for comprehensive uncertainty estimation. The DAVR framework features a dual-pathway architecture: one pathway leverages dual selector modules to assess response reliability by fusing VLM latent features with QA embeddings, while the other deploys external reference models for factual cross-checking to mitigate hallucinations. Evaluated in the Reliable VQA Challenge at ICCV-CLVL 2025, DAVR achieves a leading $\Phi_{100}$ score of 39.64 and a 100-AUC of 97.22, securing first place and demonstrating its effectiveness in enhancing the trustworthiness of VLM responses.
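For readers unfamiliar with the $\Phi_{100}$ metric reported in the abstract, the following sketch illustrates effective reliability for selective VQA, assuming the standard definition used by the Reliable VQA benchmark: an answered-and-correct prediction scores its (soft) VQA accuracy, an answered-and-wrong prediction is penalized by the cost c (here c = 100), and an abstention scores zero. The function name and input format are illustrative, not from the paper.

```python
def effective_reliability(predictions, cost=100.0):
    """Effective reliability Phi_c for selective VQA (assumed definition).

    predictions: list of (answered, accuracy) pairs, where `answered`
    is True if the model chose to answer and `accuracy` is the soft
    VQA accuracy in [0, 1] of that answer.
    """
    total = 0.0
    for answered, acc in predictions:
        if not answered:
            continue               # abstention contributes 0
        # Correct (acc > 0) earns the accuracy; wrong incurs -cost.
        total += acc if acc > 0 else -cost
    return total / len(predictions)
```

Under this scoring, abstaining is preferable to answering incorrectly whenever the model's expected accuracy is low, which is why the benchmark rewards well-calibrated confidence estimates.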
Problem

Research questions and friction points this paper is trying to address.

Enhances VQA reliability by reducing hallucination risks
Integrates self-reflection and cross-model verification for uncertainty estimation
Improves answer trustworthiness through dual-pathway assessment architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-pathway architecture with self-reflection and cross-model verification
Fuses VLM latent features with QA embeddings for reliability
Uses external reference models for factual cross-checking
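The dual-pathway fusion described above can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the mixing weight `alpha`, the string-match agreement test, and the abstention threshold are all assumptions standing in for DAVR's dual-selector module and its cross-model verification logic.

```python
def davr_style_reliability(self_conf: float, answer: str,
                           reference_answer: str, alpha: float = 0.5) -> float:
    """Fuse two reliability signals into one score (illustrative only).

    self_conf:        internal self-reflection confidence in [0, 1]
                      (stand-in for pathway 1, the dual-selector assessment).
    reference_answer: answer produced by an external reference model
                      (stand-in for pathway 2, cross-model verification).
    alpha:            illustrative mixing weight, not from the paper.
    """
    # Cross-model agreement: 1.0 when the two models' answers match
    # after simple normalization, else 0.0.
    agreement = 1.0 if answer.strip().lower() == reference_answer.strip().lower() else 0.0
    return alpha * self_conf + (1.0 - alpha) * agreement


def should_answer(score: float, threshold: float = 0.5) -> bool:
    """Abstain when the fused reliability score falls below a threshold."""
    return score >= threshold
```

In this toy setup, an answer that the external model contradicts must carry very high internal confidence to clear the abstention threshold, mirroring the intuition that disagreement between models flags likely hallucination.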
Authors
Xixian Wu, Bilibili Inc.
Yang Ou, Bilibili Inc.
Pengchao Tian, Bilibili Inc.
Zian Yang, Bilibili Inc.
Jielei Zhang, Bilibili Inc. (computer vision, computer graphics, OCR)
Peiyi Li, Bilibili Inc.
Longwen Gao, Bilibili Inc.