🤖 AI Summary
This study addresses the challenges of spatially localizing anatomical structures in 3D medical images, which arise from variations in imaging modalities, slice orientations, coordinate systems, and linguistic descriptions. To this end, the authors introduce MIS-Ground, the first comprehensive benchmark specifically designed to evaluate spatial grounding in medical imaging, and propose MIS-SemSam, a lightweight, model-agnostic, inference-stage optimization method. MIS-SemSam enhances the spatial reasoning of vision-language models through a semantic sampling strategy combined with multimodal prompting, including labels, bounding boxes, and mask overlays. Experimental results show that MIS-SemSam improves the spatial localization accuracy of Qwen3-VL-32B by 13.06% on the MIS-Ground benchmark, substantially outperforming existing baseline approaches.
📝 Abstract
Vision-language models (VLMs) have shown significant promise in visual grounding for both images and videos. In medical imaging research, VLMs serve as a bridge between object detection and segmentation on one side, and report understanding and generation on the other. However, spatially grounding anatomical structures in the three-dimensional space of medical images poses unique challenges. In this study, we examine image modalities, slice directions, and coordinate systems as differentiating factors for the vision components of VLMs, and anatomical, directional, and relational terminology as factors for the language components. We then demonstrate that visual and textual prompting schemes such as labels, bounding boxes, and mask overlays have varying effects on the spatial grounding ability of VLMs. To enable measurement and reproducibility, we introduce \textbf{MIS-Ground}, a benchmark that comprehensively probes a VLM for vulnerabilities across specific modes of \textbf{M}edical \textbf{I}mage \textbf{S}patial \textbf{Ground}ing. We release MIS-Ground to the public at \href{https://anonymous.4open.science/r/mis-ground}{\texttt{anonymous.4open.science/r/mis-ground}}. In addition, we present \textbf{MIS-SemSam}, a low-cost, inference-time, and model-agnostic optimization of VLMs that improves their spatial grounding ability through \textbf{Sem}antic \textbf{Sam}pling. We find that MIS-SemSam improves the accuracy of Qwen3-VL-32B on MIS-Ground by 13.06\%.
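The abstract mentions visual prompting via labels, bounding boxes, and mask overlays. As a minimal sketch of what such a visual-plus-textual prompt might look like, the snippet below burns a bounding-box outline into a synthetic 2D slice and pairs it with a matching textual instruction. All names here (`make_slice`, `overlay_box`, `build_prompt`) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of multimodal prompting for spatial grounding:
# a visual overlay (bounding box drawn on the slice) plus a textual
# prompt that ties the overlay to anatomical language.

def make_slice(h, w, fill=0):
    """Create a synthetic grayscale slice as a 2D list of pixel values."""
    return [[fill] * w for _ in range(h)]

def overlay_box(img, top, left, bottom, right, value=255):
    """Burn a rectangular outline into the slice as a visual prompt."""
    for x in range(left, right + 1):
        img[top][x] = value
        img[bottom][x] = value
    for y in range(top, bottom + 1):
        img[y][left] = value
        img[y][right] = value
    return img

def build_prompt(label, box, slice_axis, modality):
    """Compose a textual prompt referencing the overlaid box."""
    return (f"The {modality} {slice_axis} slice shows a highlighted box "
            f"at {box} overlaying the {label}. "
            f"Report the structure's image coordinates.")

# Example: prompt a VLM about a boxed region on a 64x64 axial CT slice.
slice_img = overlay_box(make_slice(64, 64), 10, 12, 30, 40)
prompt = build_prompt("left kidney", (10, 12, 30, 40), "axial", "CT")
```

A real pipeline would rasterize the overlay onto the actual image tensor before passing it, together with the prompt text, to the VLM; a mask overlay would fill a segmented region instead of drawing only the box outline.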