🤖 AI Summary
Existing 3D multimodal large language models (MLLMs) heavily rely on scene compression or object segmentation, limiting their capacity to faithfully model the geometric and topological structure of 3D space, thereby impairing spatial perception. To address this, we propose a progressive spatial awareness mechanism that constructs position-sensitive and distance-aware 3D scene embeddings via hierarchical encoding, explicitly modeling inter-object depth, metric distance, and spatial layout relationships. We introduce two novel tasks, 3D object distance measurement and layout editing, to foster fine-grained understanding of spatial semantics. Furthermore, we integrate a large language model with a dedicated 3D spatial encoder within a unified architecture, leveraging vision-guided prompting and a self-constructed 3D instruction-tuning dataset. Experiments demonstrate state-of-the-art performance across multiple 3D vision-language benchmarks, with substantial gains in reasoning about complex spatial relations.
📝 Abstract
A new era has unlocked exciting possibilities for extending Large Language Models (LLMs) to tackle 3D vision-language tasks. However, most existing 3D multimodal LLMs (MLLMs) rely on compressing holistic 3D scene information or segmenting independent objects to perform these tasks, which limits their spatial awareness due to insufficient representation of the richness inherent in 3D scenes. To overcome these limitations, we propose Spatial 3D-LLM, a 3D MLLM specifically designed to enhance spatial awareness for 3D vision-language tasks by enriching the spatial embeddings of 3D scenes. Spatial 3D-LLM integrates an LLM backbone with a progressive spatial awareness scheme that captures spatial information as the perception field expands, generating location-enriched 3D scene embeddings to serve as visual prompts. Furthermore, we introduce two novel tasks, 3D object distance measurement and 3D layout editing, and construct a 3D instruction dataset, MODEL, to evaluate the model's spatial awareness capabilities. Experimental results demonstrate that Spatial 3D-LLM achieves state-of-the-art performance across a wide range of 3D vision-language tasks, revealing that the improvements stem from our progressive spatial awareness scheme, which mines deeper spatial information. Our code is available at https://github.com/bjshuyuan/Spatial-3D-LLM.