🤖 AI Summary
Current medical visual question answering (VQA) benchmarks lack clinical relevance and do not probe the higher-order reasoning that multimodal large language models (MLLMs) need in order to detect and correct errors in CT reports. To address this, we introduce MedErr-CT, the first fine-grained VQA benchmark designed specifically for CT report error correction. It covers six clinically authentic error types and establishes a three-tiered task hierarchy: classification, detection, and correction. The benchmark combines 3D medical MLLMs, cross-modal (vision + text) alignment, and multi-level reasoning for joint image-report analysis. Comprehensive experiments on state-of-the-art 3D medical MLLMs reveal significant performance disparities across error categories, filling a critical gap in clinical error-correction evaluation. MedErr-CT provides both a quantifiable assessment standard and high-quality data for improving diagnostic accuracy and model reliability.
📝 Abstract
Computed Tomography (CT) plays a crucial role in clinical diagnosis, but the growing demand for CT examinations has raised concerns about diagnostic errors. While Multimodal Large Language Models (MLLMs) demonstrate promising comprehension of medical knowledge, their tendency to produce inaccurate information highlights the need for rigorous validation. However, existing medical visual question answering (VQA) benchmarks primarily focus on simple visual recognition tasks, lacking clinical relevance and failing to assess expert-level knowledge. We introduce MedErr-CT, a novel benchmark for evaluating medical MLLMs' ability to identify and correct errors in CT reports through a VQA framework. The benchmark covers six error categories: four vision-centric (Omission, Insertion, Direction, Size) and two lexical (Unit, Typo). It is organized into three task levels: classification, detection, and correction. Using this benchmark, we quantitatively assess the performance of state-of-the-art 3D medical MLLMs, revealing substantial variation in their capabilities across different error types. Our benchmark contributes to the development of more reliable and clinically applicable MLLMs, ultimately helping reduce diagnostic errors and improve accuracy in clinical practice. The code and datasets are available at https://github.com/babbu3682/MedErr-CT.
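The taxonomy described above (six error categories crossed with three task levels) implies a per-category, per-task evaluation grid. The sketch below is an illustrative encoding of that grid in Python; the enum and the `per_category_accuracy` helper are assumptions for demonstration and do not reflect the actual MedErr-CT dataset schema or evaluation code.

```python
from enum import Enum
from collections import defaultdict

# Hypothetical encoding of the MedErr-CT taxonomy.
# Category names are taken from the abstract; everything else is illustrative.
class ErrorType(Enum):
    OMISSION = "Omission"      # vision-centric
    INSERTION = "Insertion"    # vision-centric
    DIRECTION = "Direction"    # vision-centric
    SIZE = "Size"              # vision-centric
    UNIT = "Unit"              # lexical
    TYPO = "Typo"              # lexical

# The three-tiered task hierarchy from the abstract.
TASK_LEVELS = ("classification", "detection", "correction")

def per_category_accuracy(records):
    """Aggregate accuracy over the (error_type, task) grid.

    records: iterable of dicts with keys 'error_type', 'task',
    and 'correct' (bool). Returns {(error_type, task): accuracy}.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        key = (r["error_type"], r["task"])
        totals[key] += 1
        hits[key] += int(r["correct"])
    return {k: hits[k] / totals[k] for k in totals}
```

Reporting results on this grid rather than as a single score is what lets the benchmark surface the per-category performance disparities the paper describes.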