Thinking in Space: How Multimodal Large Language Models See, Remember, and Recall Spaces

📅 2024-12-18
🏛️ arXiv.org
📈 Citations: 107
Influential: 28
🤖 AI Summary
This study investigates whether multimodal large language models (MLLMs) possess human-like video-based spatial perception, memory, and reasoning. Method: We introduce VSI-Bench, a video-based visual-spatial intelligence benchmark comprising over 5,000 question-answer pairs, enabling systematic evaluation of how well MLLMs model the spatiotemporal structure of video sequences. We further propose a cognitive-map generation mechanism that explicitly encodes spatial relations into topological, reasoning-amenable representations, sidestepping the inherent limitations of chain- or tree-structured linguistic reasoning. Contribution/Results: Evaluation on VSI-Bench reveals that MLLMs achieve competitive, though subhuman, visual-spatial intelligence, and that local world models and emergent spatial awareness arise within these models. Cognitive-map guidance improves spatial-distance reasoning accuracy by an average of 12.7%, significantly outperforming purely linguistic enhancement methods such as chain-of-thought (CoT).

📝 Abstract
Humans possess the visual-spatial intelligence to remember spaces from sequential visual observations. However, can Multimodal Large Language Models (MLLMs) trained on million-scale video datasets also "think in space" from videos? We present a novel video-based visual-spatial intelligence benchmark (VSI-Bench) of over 5,000 question-answer pairs, and find that MLLMs exhibit competitive, though subhuman, visual-spatial intelligence. We probe models to express how they think in space both linguistically and visually and find that while spatial reasoning capabilities remain the primary bottleneck for MLLMs to reach higher benchmark performance, local world models and spatial awareness do emerge within these models. Notably, prevailing linguistic reasoning techniques (e.g., chain-of-thought, self-consistency, tree-of-thoughts) fail to improve performance, whereas explicitly generating cognitive maps during question-answering enhances MLLMs' spatial distance ability.
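The cognitive-map idea can be illustrated with a toy sketch. This is not the paper's implementation; the object names, grid layout, and helper functions below are hypothetical. The point is that once object positions are externalized onto a small grid, a spatial-distance question becomes a geometric lookup rather than a chain of linguistic inference steps:

```python
from math import dist

def build_cognitive_map(placements):
    """A cognitive map as a mapping of object name -> (x, y) grid cell.

    `placements` stands in for whatever positions an MLLM would emit
    when asked to lay out a scene on a grid (hypothetical format).
    """
    return dict(placements)

def nearest_object(cmap, anchor):
    """Answer 'which object is closest to <anchor>?' by Euclidean distance."""
    anchor_xy = cmap[anchor]
    others = ((name, dist(anchor_xy, xy))
              for name, xy in cmap.items() if name != anchor)
    return min(others, key=lambda pair: pair[1])[0]

# Toy scene: the lamp is 1 cell from the sofa, the table is 3 cells away.
cmap = build_cognitive_map({"sofa": (2, 3), "table": (5, 3), "lamp": (2, 4)})
print(nearest_object(cmap, "sofa"))  # -> lamp
```

Framed this way, the contrast with chain-of-thought is that the map is built once and then queried exactly, which is one plausible reading of why map generation helps on distance questions while purely linguistic techniques do not.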
Problem

Research questions and friction points this paper is trying to address.

Assessing MLLMs' visual-spatial intelligence from videos
Developing a benchmark for video-based spatial reasoning
Improving MLLMs' spatial awareness via cognitive maps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed video-based visual-spatial intelligence benchmark
Explored local world models in MLLMs
Enhanced spatial ability via cognitive maps