🤖 AI Summary
Existing automated radiology report evaluation methods rely predominantly on surface-level textual similarity, lacking clinical interpretability and alignment with expert judgment. To address this, we propose ICARE, a framework in which dual large language model (LLM) agents engage in reciprocal questioning, recasting clinical quality assessment as dynamic multiple-choice question answering (MCQA). Through this conversational cross-validation, each evaluation score is explicitly tied to a specific clinical question, yielding interpretable precision and recall metrics, a transparent assessment process, attributable error patterns, and reproducible results. Evaluated on multicenter clinical data, ICARE agrees significantly more with radiologist judgments than conventional baselines, including BLEU, ROUGE, and BERTScore (p < 0.01), and is highly sensitive to critical clinical elements such as lesion localization and severity grading.
📝 Abstract
Radiological imaging is central to diagnosis, treatment planning, and clinical decision-making. Vision-language foundation models have spurred interest in automated radiology report generation (RRG), but safe deployment requires reliable clinical evaluation of generated reports. Existing metrics often rely on surface-level similarity or behave as black boxes, lacking interpretability. We introduce ICARE (Interpretable and Clinically-grounded Agent-based Report Evaluation), an interpretable evaluation framework leveraging large language model agents and dynamic multiple-choice question answering (MCQA). Two agents, each given either the ground-truth or the generated report, generate clinically meaningful questions and quiz each other. Agreement on answers captures preservation and consistency of findings, serving as interpretable proxies for clinical recall and precision. By linking scores to question-answer pairs, ICARE enables transparent, interpretable assessment. Clinician studies show ICARE aligns significantly more closely with expert judgment than prior metrics. Perturbation analyses confirm sensitivity to clinical content and reproducibility, while model comparisons reveal interpretable error patterns.
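The agreement-based scoring described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, answer-dict representation, and the F1-style aggregation are all assumptions. The idea is that agreement on questions derived from the ground-truth report acts as a recall proxy (were findings preserved?), while agreement on questions derived from the generated report acts as a precision proxy (are stated findings consistent?).

```python
def agreement(answers_a: dict, answers_b: dict) -> float:
    """Fraction of questions on which two agents chose the same MCQA option.

    Each dict maps a question id to the selected answer option.
    """
    if not answers_a:
        return 0.0
    matched = sum(1 for q, a in answers_a.items() if answers_b.get(q) == a)
    return matched / len(answers_a)


def icare_scores(gt_question_answers, gen_question_answers) -> dict:
    """Interpretable precision/recall proxies from cross-quizzing (sketch).

    gt_question_answers:  pair of answer dicts (one per agent) for questions
                          derived from the ground-truth report.
    gen_question_answers: the same pair for questions derived from the
                          generated report.
    """
    recall = agreement(*gt_question_answers)      # findings preserved
    precision = agreement(*gen_question_answers)  # findings consistent
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)         # hypothetical aggregation
    return {"precision": precision, "recall": recall, "f1": f1}
```

Because every score is a ratio over named question-answer pairs, a low value can be traced back to the exact questions the agents disagreed on, which is what makes the metric interpretable.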