Can LLMs Support Medical Knowledge Imputation? An Evaluation-Based Perspective

📅 2025-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical knowledge graphs (KGs) exhibit substantial incompleteness and inconsistency in disease–treatment mappings (e.g., ICD/Mondo ↔ ATC), while large language models (LLMs), though promising for KG completion, suffer from factual inaccuracies, hallucinated associations, and cross- and intra-model instability. Method: This work introduces the first evaluation framework grounded in authoritative clinical guidelines (UpToDate, NCCN) as ground truth, systematically assessing zero-shot treatment-relation completion reliability across GPT-4, Claude, Med-PaLM, and other LLMs. Results: Empirical evaluation reveals a 38–62% conflict rate between LLM-generated recommendations and guideline standards, underscoring critical safety risks in direct deployment. The core contributions are: (1) a guideline-driven, clinically grounded evaluation framework; (2) empirical evidence of LLM unreliability in clinical mapping tasks; and (3) validation of a hybrid approach—integrating symbolic reasoning with human-in-the-loop verification—as a viable path toward trustworthy clinical KG completion.

📝 Abstract
Medical knowledge graphs (KGs) are essential for clinical decision support and biomedical research, yet they often exhibit incompleteness due to knowledge gaps and structural limitations in medical coding systems. This issue is particularly evident in treatment mapping, where coding systems such as ICD, Mondo, and ATC lack comprehensive coverage, resulting in missing or inconsistent associations between diseases and their potential treatments. To address this gap, we explore the use of Large Language Models (LLMs) for imputing missing treatment relationships. Although LLMs offer promising capabilities for knowledge augmentation, their application to medical knowledge imputation carries significant risks, including factual inaccuracies, hallucinated associations, and instability both across and within LLMs. In this study, we systematically evaluate LLM-driven treatment mapping, assessing its reliability through benchmark comparisons. Our findings highlight critical limitations, including inconsistencies with established clinical guidelines and potential risks to patient safety. This study serves as a cautionary guide for researchers and practitioners, underscoring the importance of critical evaluation and hybrid approaches when leveraging LLMs to enhance treatment mappings in medical knowledge graphs.
Problem

Research questions and friction points this paper is trying to address.

Addressing incompleteness in medical knowledge graphs for treatment mapping.
Evaluating LLMs for imputing missing disease-treatment relationships.
Assessing risks of LLMs in medical knowledge augmentation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs to impute missing disease–treatment relationships in medical KGs
Systematic, guideline-grounded evaluation of LLM-driven treatment-mapping reliability
Hybrid approaches (symbolic reasoning with human-in-the-loop verification) for trustworthy KG completion
Xinyu Yao
Heinz College of Information Systems and Public Policy, Carnegie Mellon University, Pittsburgh, PA
Aditya Sannabhadti
Heinz College of Information Systems and Public Policy, Carnegie Mellon University, Pittsburgh, PA
Holly Wiberg
Carnegie Mellon University
Healthcare Analytics · Personalized Medicine · Optimization · Machine Learning
Karmel S. Shehadeh
Daniel J. Epstein Department of Industrial and Systems Engineering, University of Southern California, Los Angeles, CA
Rema Padman
Heinz College of Information Systems and Public Policy, Carnegie Mellon University, Pittsburgh, PA