Evaluating ChatGPT on Medical Information Extraction Tasks: Performance, Explainability and Beyond

📅 2026-01-29
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study systematically evaluates the practical capabilities and reliability of ChatGPT in medical information extraction tasks. Focusing on four task types—including named entity recognition and relation extraction—the work presents the first comprehensive analysis of ChatGPT’s behavior across five dimensions: performance, interpretability, confidence, faithfulness, and uncertainty, using six standard MedIE benchmark datasets. The results indicate that while ChatGPT underperforms compared to fine-tuned baseline models in overall accuracy, it generates high-quality explanations and demonstrates relatively high faithfulness. However, it exhibits pervasive overconfidence and substantial output uncertainty, which collectively limit its suitability for direct deployment in high-stakes clinical settings. This research establishes a multidimensional analytical framework for assessing the reliability of large language models in medical text processing.

📝 Abstract
Large Language Models (LLMs) like ChatGPT have demonstrated impressive capabilities in comprehending user intents and generating reasonable and useful responses. Besides their ability to chat, their capabilities in various natural language processing (NLP) tasks are of interest to the research community. In this paper, we focus on assessing the overall ability of ChatGPT on 4 different medical information extraction (MedIE) tasks across 6 benchmark datasets. We present a systematic analysis by measuring ChatGPT's performance, explainability, confidence, faithfulness, and uncertainty. Our experiments reveal that: (a) ChatGPT's performance scores on MedIE tasks fall behind those of fine-tuned baseline models. (b) ChatGPT can provide high-quality explanations for its decisions; however, it is over-confident in its predictions. (c) ChatGPT demonstrates a high level of faithfulness to the original text in the majority of cases. (d) Uncertainty in generation leads to uncertainty in the extraction results, which may hinder its application to MedIE tasks.
Problem

Research questions and friction points this paper is trying to address.

Medical Information Extraction
ChatGPT
Large Language Models
Explainability
Uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Medical Information Extraction
Large Language Models
Explainability
Faithfulness
Uncertainty