Information Extraction from Clinical Notes: Are We Ready to Switch to Large Language Models?

📅 2024-11-15
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clinical information extraction (IE), particularly named entity recognition (NER) and relation extraction (RE), remains challenging due to domain specificity, annotation scarcity, and cross-institutional heterogeneity. Method: We systematically evaluate open-source large language models (LLaMA-2/3) against BERT on clinical NER and RE. A new high-quality corpus of 1,588 manually annotated clinical documents from four sources is constructed; LLaMA is instruction-tuned, and a unified framework for entity and modifier annotation is proposed. We further develop Kiwi, a lightweight open-source toolkit for efficient inference. Contribution/Results: Our study provides the first empirical evidence that LLaMA-3-70B achieves 89.2% F1 on NER and 86.5% F1 on RE on the i2b2 test set, outperforming BERT by 7.0 and 4.0 points, respectively, with marked advantages in low-resource and cross-domain settings. However, computational overhead increases substantially: throughput drops to roughly 1/28 of BERT's, and GPU memory demand rises. Key contributions include the first rigorous feasibility validation of LLaMA-family models for clinical IE, the release of a benchmark clinical corpus, and the open-source Kiwi toolkit.

📝 Abstract
Background: Information extraction (IE) is critical in clinical natural language processing (NLP). While large language models (LLMs) excel on generative tasks, their performance on extractive tasks remains debated. Methods: We investigated Named Entity Recognition (NER) and Relation Extraction (RE) using 1,588 clinical notes from four sources (UT Physicians, MTSamples, MIMIC-III, and i2b2). We developed an annotated corpus covering 4 clinical entities and 16 modifiers, and compared instruction-tuned LLaMA-2 and LLaMA-3 against BERT in terms of performance, generalizability, computational resources, and throughput. Results: LLaMA models outperformed BERT across datasets. With sufficient training data, LLaMA showed modest improvements (1% on NER, 1.5-3.7% on RE); improvements were larger with limited training data. On unseen i2b2 data, LLaMA-3-70B outperformed BERT by 7% (F1) on NER and 4% on RE. However, LLaMA models required more computing resources and ran up to 28 times slower. We implemented "Kiwi," a clinical IE package featuring both models, available at https://kiwi.clinicalnlp.org/. Conclusion: This study is among the first to develop and evaluate a comprehensive clinical IE system using open-source LLMs. Results indicate that LLaMA models outperform BERT for clinical NER and RE, but with higher computational costs and lower throughput. These findings highlight that the choice between LLMs and traditional deep learning methods for clinical IE should remain task-specific, weighing both performance metrics and practical considerations such as available computing resources and the intended use case.
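The F1 comparisons reported above are conventionally computed at the entity level with strict span matching. As a minimal sketch of that metric (the span tuple format and function name here are illustrative assumptions, not from the paper or the Kiwi package):

```python
# Sketch: entity-level (strict-match) precision/recall/F1 for NER.
# A prediction counts as correct only if document, offsets, and
# entity type all match a gold annotation exactly.

def ner_f1(gold, pred):
    """gold/pred: sets of (doc_id, start, end, entity_type) tuples."""
    tp = len(gold & pred)                      # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {("note1", 0, 7, "PROBLEM"), ("note1", 12, 20, "DRUG")}
pred = {("note1", 0, 7, "PROBLEM"), ("note1", 25, 30, "DRUG")}
print(round(ner_f1(gold, pred), 2))  # 0.5: one of two predictions matches
```

The same exact-match logic extends to RE by comparing (head span, tail span, relation type) triples instead of entity tuples.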
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Clinical Information Extraction
Comparative Performance Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Clinical Information Extraction
Kiwi Package
Authors
Yan Hu, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
X. Zuo, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
Yujia Zhou, Department of Biomedical Informatics and Data Science, Yale School of Medicine, Yale University, New Haven, USA
Xueqing Peng, Yale University
Jimin Huang, The Fin AI (computational finance)
V. Keloth, Department of Biomedical Informatics and Data Science, Yale School of Medicine, Yale University, New Haven, USA
Vincent J. Zhang, Department of Biomedical Informatics and Data Science, Yale School of Medicine, Yale University, New Haven, USA
Ruey-Ling Weng, Yale University (bioinformatics, user-centered design, human-computer interaction)
Qingyu Chen, Biomedical Informatics & Data Science, Yale University; NCBI-NLM, National Institutes of Health (text mining, machine learning, data curation, BioNLP, medical imaging analysis)
Xiaoqian Jiang, McWilliams School of Biomedical Informatics, UTHealth (predictive modeling, healthcare privacy)
Kirk E. Roberts, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
Hua Xu, Department of Biomedical Informatics and Data Science, Yale School of Medicine, Yale University, New Haven, USA