Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) exhibit severe deficiencies in spatial reasoning, and existing benchmarks fail to isolate this capability from related tasks such as object detection and semantic comprehension. Method: The paper introduces a multi-dimensional benchmark dedicated to spatial reasoning, covering four core dimensions: spatial relations, orientation and navigation, mental rotation, and spatial visualization. It evaluates 13 state-of-the-art VLMs on both synthetic and real-world images, using cognitively grounded task paradigms designed to disentangle spatial reasoning from object detection and semantic understanding. Contribution/Results: Experiments reveal that average accuracy across the 13 models approximates random chance, confirming spatial reasoning as a fundamental bottleneck in modern VLMs. The benchmark's code and dataset are publicly released.

📝 Abstract
Vision-Language Models (VLMs) have recently emerged as powerful tools, excelling in tasks that integrate visual and textual comprehension, such as image captioning, visual question answering, and image-text retrieval. However, existing benchmarks for VLMs that include spatial components often fail to isolate spatial reasoning from related tasks such as object detection or semantic comprehension. In this paper, we address these deficiencies with a multi-faceted approach towards understanding spatial reasoning. Informed by the diverse and multi-dimensional nature of human spatial reasoning abilities, we present a detailed analysis that first delineates the core elements of spatial reasoning: spatial relations, orientation and navigation, mental rotation, and spatial visualization, and then assesses the performance of these models in both synthetic and real-world images, bridging controlled and naturalistic contexts. We analyze 13 state-of-the-art Vision-Language Models, uncovering pivotal insights into their spatial reasoning performance. Our results reveal profound shortcomings in current VLMs, with average accuracy across the 13 models approximating random chance, highlighting spatial reasoning as a persistent obstacle. This work not only exposes the pressing need to advance spatial reasoning within VLMs but also establishes a solid platform for future exploration. Code available on GitHub (https://github.com/stogiannidis/srbench) and dataset available on HuggingFace (https://huggingface.co/datasets/stogiannidis/srbench).
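
Since the dataset is distributed on HuggingFace, it should be loadable with the standard datasets library. The sketch below is a minimal, hedged example: the split layout and field names (e.g., image/question/answer) are assumptions about the schema, not documented facts, so it inspects the dataset before use.

from datasets import load_dataset

# Load the SRBench dataset from the HuggingFace Hub.
ds = load_dataset("stogiannidis/srbench")
print(ds)                      # shows the actual splits and column names

# Peek at one example; field names such as "image" or "question"
# are assumptions to be verified against the printed schema above.
split = next(iter(ds.values()))
example = split[0]
print(example.keys())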
Problem

Research questions and friction points this paper is trying to address.

Assessing VLMs' spatial reasoning across synthetic and real-world images
Identifying the core spatial reasoning abilities that current VLMs lack
Benchmarking 13 state-of-the-art VLMs to expose critical spatial reasoning gaps
Innovation

Methods, ideas, or system contributions that make the work stand out.

A benchmark spanning four cognitively grounded dimensions: spatial relations, orientation and navigation, mental rotation, and spatial visualization
Evaluation on both synthetic and real-world images, isolating spatial reasoning from object detection and semantic comprehension
A study of 13 state-of-the-art VLMs revealing near-chance average accuracy (see the evaluation sketch below)
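
To make the benchmarking setup concrete, here is a hypothetical zero-shot evaluation loop that scores any model wrapped as a predict_fn(image, question) -> answer string, reporting accuracy per spatial dimension. The field names ("image", "question", "answer", "category") are assumed placeholders for the dataset schema, not the paper's actual evaluation code.

from collections import defaultdict

def evaluate(dataset, predict_fn):
    """Return per-dimension accuracy for a zero-shot predictor."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in dataset:
        dim = ex.get("category", "all")        # assumed dimension field
        pred = predict_fn(ex["image"], ex["question"])
        total[dim] += 1
        # Exact-match scoring after light normalization.
        if pred.strip().lower() == ex["answer"].strip().lower():
            correct[dim] += 1
    return {d: correct[d] / total[d] for d in total}

# Usage with a trivial baseline that always answers "A",
# which approximates the chance level the paper reports:
# accuracies = evaluate(split, lambda img, q: "A")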