A Dataset for Addressing Patient's Information Needs related to Clinical Course of Hospitalization

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing EHR question-answering (QA) research lacks evaluation benchmarks aligned with the real-world clinical information needs of hospitalized patients. Method: We introduce ArchEHR-QA—the first patient-centered EHR QA dataset—comprising 134 real ICU and emergency department cases, each annotated with patient-formulated questions, clinician-provided answers, clinical notes, and fine-grained sentence-level relevance labels. We systematically characterize inpatient clinical information needs and propose a multidimensional evaluation framework jointly assessing factual accuracy and relevance. Contribution/Results: Benchmarking three open-weight LLMs (Llama 4, Llama 3, and Mixtral) across three prompting strategies, we find "answer-first" prompting optimal, with Llama 4 achieving the highest performance. Error analysis identifies missing critical evidence and hallucination as the primary failure modes. ArchEHR-QA establishes a reproducible, patient-oriented benchmark for clinical QA research, enabling rigorous evaluation of model fidelity to real-world clinical decision-support requirements.

📝 Abstract
Patients have distinct information needs about their hospitalization that can be addressed using clinical evidence from electronic health records (EHRs). While artificial intelligence (AI) systems show promise in meeting these needs, robust datasets are needed to evaluate the factual accuracy and relevance of AI-generated responses. To our knowledge, no existing dataset captures patient information needs in the context of their EHRs. We introduce ArchEHR-QA, an expert-annotated dataset based on real-world patient cases from intensive care unit and emergency department settings. The cases comprise questions posed by patients to public health forums, clinician-interpreted counterparts, relevant clinical note excerpts with sentence-level relevance annotations, and clinician-authored answers. To establish benchmarks for grounded EHR question answering (QA), we evaluated three open-weight large language models (LLMs)--Llama 4, Llama 3, and Mixtral--across three prompting strategies: generating (1) answers with citations to clinical note sentences, (2) answers before citations, and (3) answers from filtered citations. We assessed performance on two dimensions: Factuality (overlap between cited note sentences and ground truth) and Relevance (textual and semantic similarity between system and reference answers). The final dataset contains 134 patient cases. The answer-first prompting approach consistently performed best, with Llama 4 achieving the highest scores. Manual error analysis supported these findings and revealed common issues such as omitted key clinical evidence and contradictory or hallucinated content. Overall, ArchEHR-QA provides a strong benchmark for developing and evaluating patient-centered EHR QA systems, underscoring the need for further progress toward generating factual and relevant responses in clinical contexts.
Problem

Research questions and friction points this paper is trying to address.

No existing dataset captures patient information needs in the context of their EHRs
Need to evaluate the factual accuracy and relevance of AI-generated responses
Lack of benchmarks for grounded EHR question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

ArchEHR-QA, an expert-annotated dataset for patient-centered EHR QA
Benchmarked three open-weight LLMs (Llama 4, Llama 3, Mixtral) across three prompting strategies
Assessed performance on two dimensions: Factuality and Relevance
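The Factuality dimension above scores the overlap between the note sentences a system cites and the ground-truth relevant sentences. A minimal sketch of one plausible formulation, set-overlap F1 over cited sentence IDs (the paper's exact scoring may differ; `citation_f1` and the example IDs are illustrative assumptions):

```python
def citation_f1(cited: set[str], gold: set[str]) -> float:
    """F1 between system-cited and ground-truth note-sentence IDs.

    This is an illustrative overlap metric, not the paper's exact formula.
    """
    if not cited or not gold:
        return 0.0
    tp = len(cited & gold)          # correctly cited sentences
    precision = tp / len(cited)     # fraction of citations that are relevant
    recall = tp / len(gold)         # fraction of relevant sentences cited
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Hypothetical case: system cites sentences 2, 5, 7; gold set is 2, 5, 9.
print(round(citation_f1({"2", "5", "7"}, {"2", "5", "9"}), 3))  # → 0.667
```

A system that omits key evidence loses recall, while one that cites irrelevant or hallucinated sentences loses precision, matching the error modes noted in the summary.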
🔎 Similar Papers
2024-05-27 · International Conference on Information and Knowledge Management · Citations: 4
Sarvesh Soni
Division of Intramural Research, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
Dina Demner-Fushman
National Library of Medicine
Biomedical Informatics · Information Retrieval · Natural Language Processing · Question Answering · Summarization