🤖 AI Summary
Medical vision-language models (VLMs) frequently generate plausible yet clinically inconsistent chain-of-thought (CoT) explanations for chest X-ray question answering, undermining clinical trustworthiness. To address this, we propose the first clinical-scenario-oriented multimodal perturbation evaluation framework, which quantifies explanation faithfulness along three dimensions (clinical fidelity, causal attribution, and confidence calibration) via radiology-informed, controllable text and image perturbations. The method combines an expert radiologist reading study with Kendall correlation analysis, revealing a significant decoupling between answer accuracy and explanation quality. Evaluations across six VLMs show pervasive explanation unfaithfulness: proprietary models substantially outperform open-source counterparts in causal attribution (25.0% vs. 1.4%) and more modestly in clinical fidelity (36.1% vs. 31.7%), while confidence calibration remains weak across all models.
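To make the perturbation idea concrete, below is a minimal, hypothetical sketch of a single text-perturbation probe in the spirit of the framework described above; it is not the authors' implementation. The helper `query_vlm(image, question)` is an assumed interface returning an answer and its CoT explanation, and the specific cue wording is illustrative only.

```python
def probe_text_perturbation(image, question, cue, query_vlm):
    """Inject a misleading clinical cue into the question and check whether the
    answer flips and whether the CoT explanation acknowledges the injected cue.

    `query_vlm` is a hypothetical helper: (image, question) -> (answer, cot_text).
    """
    # Unperturbed baseline.
    baseline_answer, _baseline_cot = query_vlm(image, question)

    # Text perturbation: append a suggestive (possibly false) clinical hint.
    perturbed_question = f"{question} Note: the referring note mentions {cue}."
    perturbed_answer, perturbed_cot = query_vlm(image, perturbed_question)

    return {
        "answer_flipped": perturbed_answer != baseline_answer,
        # Did the explanation mention the injected cue at all? (Mentioning it
        # is necessary but not sufficient for genuine grounding.)
        "cue_acknowledged": cue.lower() in perturbed_cot.lower(),
    }
```

An analogous image-side probe would mask or alter a radiologically relevant region and repeat the same comparison; aggregating such probes over a dataset yields the per-axis faithfulness scores reported here.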
📝 Abstract
Vision-language models (VLMs) often produce chain-of-thought (CoT) explanations that sound plausible yet fail to reflect the underlying decision process, undermining trust in high-stakes clinical use. Existing evaluations rarely catch this misalignment, prioritizing answer accuracy or format adherence. We present a clinically grounded framework for chest X-ray visual question answering (VQA) that probes CoT faithfulness via controlled text and image modifications along three axes: clinical fidelity, causal attribution, and confidence calibration. In a reader study (n=4), evaluator-radiologist correlations fall within the observed inter-radiologist range on all axes, with strong alignment for attribution (Kendall's $\tau_b = 0.670$), moderate alignment for fidelity ($\tau_b = 0.387$), and weak alignment for confidence tone ($\tau_b = 0.091$), which we report with caution. Benchmarking six VLMs shows that answer accuracy and explanation quality are decoupled, that acknowledging injected cues does not ensure grounding, and that text cues shift explanations more than visual cues. While some open-source models match proprietary models in final-answer accuracy, proprietary models score higher on attribution (25.0% vs. 1.4%) and often on fidelity (36.1% vs. 31.7%), highlighting deployment risks and the need to evaluate beyond final-answer accuracy.
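For readers unfamiliar with the agreement statistic, the sketch below shows how Kendall's $\tau_b$ between an automated evaluator's scores and a radiologist's ratings could be computed with standard tooling; the scores are made-up placeholders, not data from the study.

```python
from scipy.stats import kendalltau

# Hypothetical per-explanation ratings on a shared ordinal scale (e.g., 1-5).
evaluator_scores = [4, 2, 5, 3, 1, 4, 2, 5]
radiologist_scores = [5, 2, 4, 3, 1, 4, 1, 5]

# variant="b" applies the tie correction that defines tau_b.
tau_b, p_value = kendalltau(evaluator_scores, radiologist_scores, variant="b")
print(f"Kendall tau_b = {tau_b:.3f} (p = {p_value:.3f})")
```

The same computation, repeated per axis and compared against pairwise inter-radiologist correlations, is the kind of analysis summarized by the $\tau_b$ values quoted above.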