Automated Evaluation can Distinguish the Good and Bad AI Responses to Patient Questions about Hospitalization

📅 2025-09-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the scalability bottleneck in manual evaluation of AI systems’ inpatient question-answering (QA) quality, this study proposes a multidimensional automated evaluation framework anchored on reference answers authored by clinical physicians. Methodologically, the framework quantifies QA performance along three clinically grounded dimensions—answer completeness, utilization of electronic health record (EHR) evidence, and accuracy of general medical knowledge—using integrated natural language processing techniques. Evaluated across 28 AI systems and 2,800 responses, the automated scores achieve high concordance with expert ratings (Spearman’s ρ > 0.9), substantially outperforming conventional reference-free or unidimensional metrics. This work introduces the first high-fidelity, scalable QA evaluation framework specifically designed for the inpatient clinical setting, enabling rapid, evidence-based iteration and trustworthy deployment of clinical AI systems.
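A minimal sketch of the concordance check described above: comparing system-level automated scores against expert ratings with Spearman's rank correlation. The system names and score values below are illustrative placeholders, not the paper's data, and the 0-1 scale is an assumption.

```python
# Hypothetical sketch: system-level agreement between automated scores and
# expert ratings, reported as Spearman's rho. Values are made-up placeholders.
from scipy.stats import spearmanr

# Assumed mean score per AI system on a 0-1 scale (not the paper's numbers).
automated = {"system_a": 0.91, "system_b": 0.74, "system_c": 0.62, "system_d": 0.55}
expert    = {"system_a": 0.88, "system_b": 0.79, "system_c": 0.58, "system_d": 0.49}

systems = sorted(automated)  # align the two score lists by system name
rho, p_value = spearmanr(
    [automated[s] for s in systems],
    [expert[s] for s in systems],
)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```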

📝 Abstract
Automated approaches to answering patient-posed health questions are on the rise, but selecting among systems requires reliable evaluation. The current gold standard for evaluating free-text artificial intelligence (AI) responses, human expert review, is labor-intensive and slow, limiting scalability. Automated metrics are promising yet variably aligned with human judgments and often context-dependent. To assess the feasibility of automating the evaluation of AI responses to hospitalization-related questions posed by patients, we conducted a large systematic study of evaluation approaches. Across 100 patient cases, we collected responses from 28 AI systems (2,800 total) and assessed them along three dimensions: whether a system response (1) answers the question, (2) appropriately uses clinical note evidence, and (3) uses general medical knowledge. Using clinician-authored reference answers to anchor metrics, automated rankings closely matched expert ratings. Our findings suggest that carefully designed automated evaluation can scale comparative assessment of AI systems and support patient-clinician communication.
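The abstract does not spell out the individual metrics, so the following is only an illustrative stand-in for one reference-anchored dimension: a token-overlap F1 between a system response and a clinician-authored reference answer. The function name and example sentences are assumptions for demonstration.

```python
# Illustrative reference-anchored metric (not the paper's exact method):
# token-overlap F1 between a system response and a clinician reference answer.
from collections import Counter

def token_f1(response: str, reference: str) -> float:
    """Harmonic mean of token precision and recall against the reference answer."""
    resp_tokens = Counter(response.lower().split())
    ref_tokens = Counter(reference.lower().split())
    overlap = sum((resp_tokens & ref_tokens).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(resp_tokens.values())
    recall = overlap / sum(ref_tokens.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example pair for illustration only.
reference = "Your shortness of breath improved after diuretics removed extra fluid."
response = "The diuretic medication removed extra fluid, which improved your breathing."
print(f"Token-overlap F1 vs. reference: {token_f1(response, reference):.2f}")
```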
Problem

Research questions and friction points this paper is trying to address.

Automating evaluation of AI health responses
Addressing scalability limitations of expert reviews
Assessing answer quality and evidence usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated evaluation matches expert clinician ratings
Anchors evaluation metrics to clinician-authored reference answers
Assesses AI responses across three clinical dimensions
🔎 Similar Papers
No similar papers found.
Sarvesh Soni
Division of Intramural Research, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
Dina Demner-Fushman
National Library of Medicine
Biomedical Informatics · Information Retrieval · Natural Language Processing · Question Answering · Summarization