MedErr-CT: A Visual Question Answering Benchmark for Identifying and Correcting Errors in CT Reports

📅 2025-06-23
🤖 AI Summary
Current medical visual question answering (VQA) benchmarks lack clinical relevance and fail to assess the higher-order reasoning capabilities of multimodal large language models (MLLMs) in detecting and correcting errors in CT reports. To address this, we introduce MedErr-CT—the first fine-grained VQA benchmark specifically designed for CT report error correction—covering six clinically authentic error types and establishing a three-tiered task hierarchy: classification, detection, and correction. We propose a novel cross-modal (vision + text) error annotation framework, integrating 3D medical MLLMs, cross-modal alignment, and multi-level reasoning for joint image–report analysis. Comprehensive experiments on state-of-the-art 3D medical MLLMs reveal significant performance disparities across error categories, thereby filling a critical gap in clinical error-correction evaluation. MedErr-CT provides both a quantifiable assessment standard and high-quality data to enhance diagnostic accuracy and model reliability.

📝 Abstract
Computed Tomography (CT) plays a crucial role in clinical diagnosis, but the growing demand for CT examinations has raised concerns about diagnostic errors. While Multimodal Large Language Models (MLLMs) demonstrate promising comprehension of medical knowledge, their tendency to produce inaccurate information highlights the need for rigorous validation. However, existing medical visual question answering (VQA) benchmarks primarily focus on simple visual recognition tasks, lacking clinical relevance and failing to assess expert-level knowledge. We introduce MedErr-CT, a novel benchmark for evaluating medical MLLMs' ability to identify and correct errors in CT reports through a VQA framework. The benchmark includes six error categories - four vision-centric errors (Omission, Insertion, Direction, Size) and two lexical error types (Unit, Typo) - and is organized into three task levels: classification, detection, and correction. Using this benchmark, we quantitatively assess the performance of state-of-the-art 3D medical MLLMs, revealing substantial variation in their capabilities across different error types. Our benchmark contributes to the development of more reliable and clinically applicable MLLMs, ultimately helping reduce diagnostic errors and improve accuracy in clinical practice. The code and datasets are available at https://github.com/babbu3682/MedErr-CT.
Problem

Research questions and friction points this paper is trying to address.

Identifying and correcting errors in CT reports
Assessing medical MLLMs' error detection capabilities
Evaluating clinical relevance of medical VQA benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MedErr-CT benchmark for CT report errors
Includes six error categories and three task levels
Evaluates 3D medical MLLMs for diagnostic accuracy
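The benchmark's structure described in the abstract can be sketched as a small data model. This is a minimal illustration, assuming a simple dict-based item format; the names (`ERROR_CATEGORIES`, `make_vqa_item`, field keys) are hypothetical and not the benchmark's actual schema.

```python
# Illustrative sketch of MedErr-CT's organization per the abstract:
# six error categories in two groups, across three task levels.
# All identifiers here are assumptions, not the released schema.
ERROR_CATEGORIES = {
    "vision-centric": ["Omission", "Insertion", "Direction", "Size"],
    "lexical": ["Unit", "Typo"],
}
TASK_LEVELS = ["classification", "detection", "correction"]

def make_vqa_item(ct_volume_id, report_text, error_type, task):
    """Build one benchmark-style VQA item (illustrative only)."""
    if not any(error_type in group for group in ERROR_CATEGORIES.values()):
        raise ValueError(f"unknown error type: {error_type}")
    if task not in TASK_LEVELS:
        raise ValueError(f"unknown task level: {task}")
    return {
        "image": ct_volume_id,      # reference to a 3D CT volume
        "report": report_text,      # report containing an injected error
        "error_type": error_type,   # one of the six categories
        "task": task,               # classification / detection / correction
    }
```

The actual item format and dataset files are in the authors' repository at https://github.com/babbu3682/MedErr-CT.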
Sunggu Kyung
Biomedical Engineering, Asan Medical Center
Machine Learning · Medical Imaging · Medical AI
Hyungbin Park
Department of Biomedical Engineering, University of Ulsan College of Medicine
Jinyoung Seo
Department of Biomedical Engineering, University of Ulsan College of Medicine
Jimin Sung
Department of Biomedical Engineering, University of Ulsan College of Medicine
Jihyun Kim
Department of Biomedical Engineering, University of Ulsan College of Medicine
Dongyeong Kim
Department of Biomedical Engineering, University of Ulsan College of Medicine
Wooyoung Jo
Department of Biomedical Engineering, University of Ulsan College of Medicine
Yoojin Nam
Department of Radiology, Samsung Changwon Hospital
Radiology · Medical AI
Sangah Park
Chosun University, School of Medicine
Medicine · Medical AI
Taehee Kwon
Department of Biomedical Engineering, University of Ulsan College of Medicine
Sang Min Lee
Department of Biomedical Engineering, University of Ulsan College of Medicine
Namkug Kim
Department of Biomedical Engineering, University of Ulsan College of Medicine