🤖 AI Summary
This study addresses a fundamental trade-off in medical vision-language models between factual accuracy and resistance to social pressure, such as the tendency to conform to user preferences, an aspect inadequately captured by existing evaluation frameworks. The authors systematically evaluate six models across three medical visual question answering benchmarks and introduce three novel metrics: L-VASE (an enhanced measure of visual-answer semantic consistency), CCS (a conformity score calibrated by model confidence), and the Clinical Safety Index (CSI), which integrates factuality, autonomy, and confidence calibration. By combining vision-language model evaluation, logit-space analysis, and geometric-mean fusion, they establish a unified safety assessment framework. Across 1,151 test cases, all evaluated 7-8B parameter models achieved CSI scores below 0.35, indicating that current models fail to simultaneously meet the clinical requirements for both accuracy and robustness.
📄 Abstract
Vision-language models (VLMs) adapted to the medical domain have shown strong performance on visual question answering benchmarks, yet their robustness against two critical failure modes, hallucination and sycophancy, remains poorly understood, particularly in combination. We evaluate six VLMs (three general-purpose, three medical-specialist) on three medical VQA datasets and uncover a grounding-sycophancy trade-off: models with the lowest hallucination propensity are the most sycophantic, while the most pressure-resistant model hallucinates more than all medical-specialist models. To characterize this trade-off, we propose three metrics: L-VASE, a logit-space reformulation of VASE that avoids its double-normalization; CCS, a confidence-calibrated sycophancy score that penalizes high-confidence capitulation; and the Clinical Safety Index (CSI), a unified measure that combines grounding, autonomy, and calibration via a geometric mean. Across 1,151 test cases, no model achieves a CSI above 0.35, indicating that none of the evaluated 7-8B parameter VLMs is simultaneously well-grounded and robust to social pressure. Our findings suggest that joint evaluation of both properties is necessary before these models can be considered for clinical use. Code is available at https://github.com/UTSA-VIRLab/AgreeOrRight
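The abstract states only that CSI combines grounding, autonomy, and calibration via a geometric mean; the exact component definitions are not given here. The sketch below illustrates the fusion step alone, under the assumption that each component has already been normalized to [0, 1]. The function name and component names are illustrative, not the paper's implementation.

```python
import math

def clinical_safety_index(grounding: float, autonomy: float, calibration: float) -> float:
    """Illustrative geometric-mean fusion of three scores in [0, 1].

    Assumed components (not the paper's exact formulas):
      - grounding:   resistance to hallucination (e.g., L-VASE-derived)
      - autonomy:    resistance to sycophantic capitulation (e.g., CCS-derived)
      - calibration: agreement between confidence and correctness
    """
    for score in (grounding, autonomy, calibration):
        if not 0.0 <= score <= 1.0:
            raise ValueError("component scores must lie in [0, 1]")
    # Geometric mean: the cube root of the product of the three components.
    return math.pow(grounding * autonomy * calibration, 1.0 / 3.0)

# A geometric mean is dominated by its weakest component: a model that is
# well-grounded but highly sycophantic (autonomy near 0) scores near 0,
# which is one motivation for this fusion over an arithmetic mean.
print(round(clinical_safety_index(0.9, 0.1, 0.8), 3))
```

Unlike an arithmetic mean, this fusion cannot be "bought back": excellence in grounding does not compensate for near-zero autonomy, which matches the paper's framing that a safe clinical model must satisfy all three properties simultaneously.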