🤖 AI Summary
This study addresses the pervasive overconfidence of vision-language models in medical visual question answering (VQA), which undermines the reliability of clinical decision support. It presents the first systematic evaluation of confidence calibration among prominent model families, including Qwen3-VL, InternVL3, and LLaVA-NeXT, on medical VQA tasks. To mitigate overconfidence, the authors propose Hallucination-Aware Calibration (HAC), a novel approach that integrates vision-grounded hallucination detection signals into Platt scaling as a post-processing step. Experimental results demonstrate that HAC substantially reduces calibration error while also improving discriminative capability (AUROC), particularly on open-ended questions, thereby achieving a favorable balance between reliability and accuracy.
📝 Abstract
As vision-language models (VLMs) are increasingly deployed in clinical decision support, accuracy alone is not enough: knowing when to trust their predictions is equally critical. Yet a systematic investigation of overconfidence in these models is still scarce in the medical domain. We address this gap through a comprehensive empirical study of confidence calibration in VLMs, spanning three model families (Qwen3-VL, InternVL3, LLaVA-NeXT), three model scales (2B--38B), and multiple confidence estimation prompting strategies, across three medical visual question answering (VQA) benchmarks. Our study yields three key findings. First, overconfidence persists across model families and is not resolved by scaling or by prompting strategies such as chain-of-thought and verbalized confidence variants. Second, simple post-hoc calibration approaches, such as Platt scaling, reduce calibration error and consistently outperform prompt-based strategies. Third, because they are (strictly) monotone, these post-hoc calibration methods are inherently limited in improving the discriminative quality of predictions, leaving AUROC unchanged. Motivated by these findings, we investigate hallucination-aware calibration (HAC), which incorporates vision-grounded hallucination detection signals as complementary inputs to refine confidence estimates. We find that leveraging these hallucination signals improves both calibration and AUROC, with the largest gains on open-ended questions. Overall, our findings suggest adopting post-hoc calibration, rather than raw confidence estimates, as standard practice for medical VLM deployment, and highlight the practical usefulness of hallucination signals for more reliable use of VLMs in medical VQA.
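Two of the abstract's claims lend themselves to a quick numeric sketch: Platt scaling is a monotone map of the raw confidence, so it can reduce calibration error (ECE) but cannot change AUROC, whereas feeding a hallucination signal in as a second input can improve both. The snippet below illustrates this on synthetic data; the overconfident confidences, the hallucination score, and the plain-NumPy logistic fit are all illustrative assumptions, not the paper's data or implementation.

```python
# Illustrative sketch only: synthetic data, not the paper's benchmarks or code.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
correct = rng.integers(0, 2, n)  # 1 = model answered correctly (~50% accuracy)
# Overconfident raw confidences: skewed toward 1 even for wrong answers.
conf = np.clip(0.70 + 0.25 * correct + rng.normal(0, 0.15, n), 0, 1)
# Hypothetical hallucination score: tends to be high when the answer is wrong.
halluc = np.clip(0.6 * (1 - correct) + rng.normal(0, 0.2, n), 0, 1)

def fit_logreg(X, y, lr=1.0, steps=5000):
    """Logistic regression via batch gradient descent (bias folded into X)."""
    X1 = np.column_stack([X, np.ones(len(y))])
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w -= lr * X1.T @ (p - y) / len(y)
    return lambda Z: 1.0 / (1.0 + np.exp(-np.column_stack([Z, np.ones(len(Z))]) @ w))

def ece(p, y, bins=10):
    """Expected calibration error over equal-width confidence bins."""
    edges = np.linspace(0, 1, bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (p >= lo) & ((p < hi) if hi < 1 else (p <= hi))
        if m.any():
            err += m.mean() * abs(p[m].mean() - y[m].mean())
    return err

def auroc(s, y):
    """Exact AUROC via pairwise comparisons (ties count as 1/2)."""
    pos, neg = s[y == 1], s[y == 0]
    d = pos[:, None] - neg[None, :]
    return (d > 0).mean() + 0.5 * (d == 0).mean()

# Platt scaling: a monotone remap of the raw confidence alone.
p_platt = fit_logreg(conf[:, None], correct)(conf[:, None])
# Hallucination-aware variant: confidence plus the hallucination signal.
X = np.column_stack([conf, halluc])
p_hac = fit_logreg(X, correct)(X)

print("ECE:  ", ece(conf, correct), ece(p_platt, correct), ece(p_hac, correct))
print("AUROC:", auroc(conf, correct), auroc(p_platt, correct), auroc(p_hac, correct))
```

On this toy setup, Platt scaling shrinks ECE while its AUROC matches the raw confidence exactly (a monotone transform preserves the ranking), and only the hallucination-aware variant moves AUROC, mirroring the abstract's second and third findings.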