Clinically Grounded Agent-based Report Evaluation: An Interpretable Metric for Radiology Report Generation

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing automated radiology report evaluation methods rely predominantly on surface-level textual similarity and lack clinical interpretability and alignment with expert judgment. To address this, we propose ICARE, a framework in which dual large language model (LLM) agents engage in reciprocal questioning, recasting clinical quality assessment as dynamic multiple-choice question answering (MCQA). This conversational cross-validation ties each evaluation score to a specific clinical question, yielding interpretable precision and recall metrics, making the assessment process transparent, enabling attribution of error patterns, and supporting reproducible results. Evaluated on multicenter clinical data, ICARE agrees significantly more with radiologist judgments than conventional baselines including BLEU, ROUGE, and BERTScore (p < 0.01), and is highly sensitive to critical clinical elements such as lesion localization and severity grading.

📝 Abstract
Radiological imaging is central to diagnosis, treatment planning, and clinical decision-making. Vision-language foundation models have spurred interest in automated radiology report generation (RRG), but safe deployment requires reliable clinical evaluation of generated reports. Existing metrics often rely on surface-level similarity or behave as black boxes, lacking interpretability. We introduce ICARE (Interpretable and Clinically-grounded Agent-based Report Evaluation), an interpretable evaluation framework leveraging large language model agents and dynamic multiple-choice question answering (MCQA). Two agents, each with either the ground-truth or generated report, generate clinically meaningful questions and quiz each other. Agreement on answers captures preservation and consistency of findings, serving as interpretable proxies for clinical precision and recall. By linking scores to question-answer pairs, ICARE enables transparent and interpretable assessment. Clinician studies show ICARE aligns significantly more with expert judgment than prior metrics. Perturbation analyses confirm sensitivity to clinical content and reproducibility, while model comparisons reveal interpretable error patterns.
Problem

Research questions and friction points this paper is trying to address.

Develop interpretable metric for radiology report evaluation
Address lack of clinical reliability in existing RRG metrics
Enable transparent assessment via dynamic question-answering agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agents for clinical report evaluation
Dynamic MCQA for interpretable assessment
Question-answer pairs as precision and recall proxies
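The scoring idea behind the innovations above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the answer lists stand in for MCQA items each agent generated from its own report and answered by both agents, and the agreement fractions serve as the recall and precision proxies described in the abstract.

```python
# Hypothetical sketch of ICARE-style scoring (not the paper's code).
# Each agent derives multiple-choice questions from its own report and
# answers the other agent's questions; answer agreement yields the
# precision and recall proxies.

def agreement(quiz_answers, peer_answers):
    """Fraction of questions on which both agents give the same answer."""
    assert len(quiz_answers) == len(peer_answers)
    if not quiz_answers:
        return 0.0
    matches = sum(a == b for a, b in zip(quiz_answers, peer_answers))
    return matches / len(quiz_answers)

# Toy MCQA answers (question text omitted for brevity).
# Questions generated from the ground-truth report, answered by both agents:
gt_agent_on_gt_qs = ["A", "B", "C", "D"]
gen_agent_on_gt_qs = ["A", "B", "C", "A"]  # generated report misses one finding

# Questions generated from the generated report, answered by both agents:
gen_agent_on_gen_qs = ["B", "A", "C"]
gt_agent_on_gen_qs = ["B", "A", "C"]       # all generated claims are supported

recall_proxy = agreement(gt_agent_on_gt_qs, gen_agent_on_gt_qs)       # 3/4
precision_proxy = agreement(gen_agent_on_gen_qs, gt_agent_on_gen_qs)  # 3/3
print(f"recall~{recall_proxy:.2f}, precision~{precision_proxy:.2f}")
```

Linking each score back to its question-answer pairs is what makes the metric auditable: a low recall proxy points directly at the ground-truth findings the generated report failed to preserve.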
Radhika Dua
Center for Data Science, New York University, 60 5th Ave, New York, 100190, NY, USA.; Department of Neurosurgery, NYU Langone Health, 450 First Avenue, New York City, 10019, NY, USA.
Young Joon Kwon
Department of Radiology, NYU Langone Health, 450 First Avenue, New York City, 10019, NY, USA.
Siddhant Dogra
Radiology Resident, NYU
Radiology, Neuroimaging, AI
Daniel Freedman
Department of Radiology, NYU Langone Health, 450 First Avenue, New York City, 10019, NY, USA.
Diana Ruan
Department of Radiology, NYU Langone Health, 450 First Avenue, New York City, 10019, NY, USA.
Motaz Nashawaty
Department of Radiology, NYU Langone Health, 450 First Avenue, New York City, 10019, NY, USA.
Danielle Rigau
Department of Radiology, NYU Langone Health, 450 First Avenue, New York City, 10019, NY, USA.
Daniel Alexander Alber
Department of Neurosurgery, NYU Langone Health, 450 First Avenue, New York City, 10019, NY, USA.; NYU Grossman School of Medicine, NYU Langone Health, 450 First Avenue, New York City, 10019, NY, USA.
Kang Zhang
National Clinical Eye Research Center, Eye Hospital, Wenzhou Medical University, Wenzhou, 325000, China.; Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou, 325000, China.; Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, 999078, China.
Kyunghyun Cho
New York University, Genentech
Machine Learning, Deep Learning
Eric Karl Oermann
New York University
Artificial Intelligence, Human Intelligence