🤖 AI Summary
This study systematically evaluates the reliability, diagnostic accuracy, and efficiency trade-offs of vision-language foundation models in clinical neuroimaging reasoning, focusing on multiple sclerosis, stroke, and brain tumors. We introduce the first multidimensional benchmark tailored for 2D MRI/CT, requiring models to jointly predict diagnosis, disease subtype, imaging modality, sequence type, and anatomical plane. To mitigate selection bias, we propose a discriminative classification framework with abstention, structured output validation, and a joint efficiency–performance evaluation protocol. Through a multi-stage assessment pipeline and zero-/few-shot prompting strategies across 20 state-of-the-art multimodal models, our experiments reveal that imaging attribute recognition is nearing saturation, whereas subtype diagnosis remains challenging. Among evaluated models, Gemini-2.5-Pro and GPT-5-Chat achieve the highest diagnostic performance, Gemini-2.5-Flash demonstrates superior efficiency, and the open-source MedGemma-1.5-4B excels under few-shot conditions with perfect structured output generation.
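To make the discriminative classification framework with abstention and structured-output validation concrete, a minimal sketch is given below. The JSON field names, label vocabularies, and the explicit `abstain` option are illustrative assumptions for this sketch, not the benchmark's actual schema.

```python
from dataclasses import dataclass
from typing import Optional
import json

# Illustrative (assumed) closed label sets; the benchmark's actual vocabularies may differ.
DIAGNOSES = {"multiple_sclerosis", "stroke", "tumor", "other_abnormality", "normal", "abstain"}
MODALITIES = {"MRI", "CT"}
PLANES = {"axial", "coronal", "sagittal"}


@dataclass
class StructuredPrediction:
    diagnosis: str
    subtype: str
    modality: str
    sequence: str
    plane: str


def parse_and_validate(raw_output: str) -> Optional[StructuredPrediction]:
    """Parse a model's JSON response and check it against closed label sets.

    Returns None when the output is malformed or uses out-of-vocabulary labels,
    so structured-output validity can be scored separately from diagnostic
    accuracy. An explicit 'abstain' diagnosis is valid: declining to guess is
    not the same as producing an invalid response.
    """
    try:
        data = json.loads(raw_output)
        pred = StructuredPrediction(**{key: str(data[key]) for key in
                                       ("diagnosis", "subtype", "modality", "sequence", "plane")})
    except (json.JSONDecodeError, KeyError, TypeError):
        return None
    if (pred.diagnosis not in DIAGNOSES
            or pred.modality not in MODALITIES
            or pred.plane not in PLANES):
        return None
    return pred


# Example: a well-formed response that abstains on the diagnosis.
sample = ('{"diagnosis": "abstain", "subtype": "none", "modality": "MRI", '
          '"sequence": "FLAIR", "plane": "axial"}')
print(parse_and_validate(sample))
```

Treating abstention as a first-class label in this way separates "the model declined" from "the model guessed wrongly or produced malformed output", which is what allows selection bias to be controlled in the scoring.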
📝 Abstract
Recent advances in multimodal large language models enable new possibilities for image-based decision support. However, their reliability and operational trade-offs in neuroimaging remain insufficiently understood. We present a comprehensive benchmarking study of vision-enabled large language models for 2D neuroimaging using curated MRI and CT datasets covering multiple sclerosis, stroke, brain tumors, other abnormalities, and normal controls. Models are required to generate multiple outputs simultaneously, including diagnosis, disease subtype, imaging modality, specialized sequence, and anatomical plane. Performance is evaluated along four dimensions: discriminative classification with abstention, calibration, structured-output validity, and computational efficiency. A multi-phase framework ensures fair comparison while controlling for selection bias. Across twenty frontier multimodal models, the results show that technical imaging attributes such as modality and plane are nearly solved, whereas diagnostic reasoning, especially subtype prediction, remains challenging. Tumor classification emerges as the most reliable task, stroke is moderately tractable, and multiple sclerosis and rare abnormalities remain difficult. Few-shot prompting improves performance for several models but increases token usage, latency, and cost. Gemini-2.5-Pro and GPT-5-Chat achieve the strongest overall diagnostic performance, while Gemini-2.5-Flash offers the best efficiency–performance trade-off. Among open-weight architectures, MedGemma-1.5-4B shows the most promise: under few-shot prompting it approaches the zero-shot performance of several proprietary models while maintaining perfect structured output. These findings provide practical insights into performance, reliability, and efficiency trade-offs, supporting standardized evaluation of multimodal LLMs in neuroimaging.
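The joint efficiency–performance reading of these results (accuracy weighed against token usage, latency, and cost) can be made concrete with a small aggregation sketch. The per-sample records and the particular trade-off ratio below are placeholder assumptions for illustration, not the paper's protocol or reported numbers.

```python
from statistics import mean

# Placeholder per-sample records for one model; the values are illustrative only.
records = [
    {"correct": True,  "latency_s": 2.1, "tokens": 310, "cost_usd": 0.0042},
    {"correct": False, "latency_s": 1.8, "tokens": 295, "cost_usd": 0.0039},
    {"correct": True,  "latency_s": 2.4, "tokens": 330, "cost_usd": 0.0047},
]


def efficiency_performance_summary(samples):
    """Aggregate accuracy alongside latency, token usage, and cost so that
    models can be compared on a joint performance-efficiency basis."""
    accuracy = mean(s["correct"] for s in samples)
    total_cost = sum(s["cost_usd"] for s in samples)
    return {
        "accuracy": round(accuracy, 3),
        "mean_latency_s": round(mean(s["latency_s"] for s in samples), 3),
        "mean_tokens": round(mean(s["tokens"] for s in samples), 1),
        "total_cost_usd": round(total_cost, 4),
        # One simple trade-off view (an assumption of this sketch): accuracy per dollar spent.
        "accuracy_per_usd": round(accuracy / total_cost, 1),
    }


# Comparing zero-shot and few-shot runs of the same model with this summary
# makes the "higher accuracy, but more tokens, latency, and cost" trade-off explicit.
print(efficiency_performance_summary(records))
```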