🤖 AI Summary
Medical knowledge graphs (KGs) exhibit substantial incompleteness and inconsistency in disease–treatment mappings (e.g., ICD/Mondo ↔ ATC), while large language models (LLMs), though promising for KG completion, suffer from factual inaccuracies, hallucinated associations, and both cross- and intra-model instability. Method: This work introduces the first evaluation framework that uses authoritative clinical guidelines (UpToDate, NCCN) as ground truth, systematically assessing the reliability of zero-shot treatment-relation completion across GPT-4, Claude, Med-PaLM, and other LLMs. Results: Empirical evaluation reveals a 38–62% conflict rate between LLM-generated recommendations and guideline standards, underscoring critical safety risks of direct deployment. The core contributions are: (1) a guideline-driven, clinically grounded evaluation framework; (2) empirical evidence of LLM unreliability in clinical mapping tasks; and (3) validation of a hybrid approach, integrating symbolic reasoning with human-in-the-loop verification, as a viable path toward trustworthy clinical KG completion.
📝 Abstract
Medical knowledge graphs (KGs) are essential for clinical decision support and biomedical research, yet they often remain incomplete due to knowledge gaps and structural limitations in medical coding systems. This issue is particularly evident in treatment mapping, where coding systems such as ICD, Mondo, and ATC lack comprehensive coverage, resulting in missing or inconsistent associations between diseases and their potential treatments. To address this gap, we explore the use of Large Language Models (LLMs) for imputing missing treatment relationships. Although LLMs offer promising capabilities for knowledge augmentation, their application to medical knowledge imputation carries significant risks, including factual inaccuracies, hallucinated associations, and instability both across and within models. In this study, we systematically evaluate LLM-driven treatment mapping, assessing its reliability through benchmark comparisons. Our findings highlight critical limitations, including inconsistencies with established clinical guidelines and potential risks to patient safety. This study serves as a cautionary guide for researchers and practitioners, underscoring the need for critical evaluation and hybrid approaches when leveraging LLMs to enhance treatment mappings in medical knowledge graphs.