🤖 AI Summary
This work addresses the challenge of enabling multimodal large language models (MLLMs) to perform 3D spatial understanding and reasoning directly from video input, without relying on explicit 3D representations such as point clouds or bird's-eye-view (BEV) maps. To this end, we propose a video-driven 3D geometric prior encoder that aligns video inputs with 3D semantic–geometric representations end to end, without requiring 3D annotations or reconstruction supervision. Our method integrates 3D visual geometric encoding, spatiotemporal video modeling, geometry-aware visual token fusion, and joint MLLM fine-tuning. On VSI-Bench, our 4B-parameter model surpasses Gemini-1.5-Pro and achieves strong results across multiple 3D spatial reasoning tasks, significantly outperforming existing video-only approaches.
📝 Abstract
Previous research has investigated the application of Multimodal Large Language Models (MLLMs) to 3D scene understanding by interpreting scenes as videos. These approaches generally depend on comprehensive 3D data inputs, such as point clouds or reconstructed Bird's-Eye View (BEV) maps. In our research, we advance this field by enhancing the capability of MLLMs to understand and reason about 3D space directly from video data, without the need for additional 3D input. We propose a novel and efficient method, the Video-3D Geometry Large Language Model (VG LLM). Our approach employs a 3D visual geometry encoder that extracts 3D prior information from video sequences. This information is integrated with visual tokens and fed into the MLLM. Extensive experiments show that our method achieves substantial improvements on a variety of 3D scene understanding and spatial reasoning tasks, all learned directly from video sources. Impressively, our 4B model, which does not rely on explicit 3D data inputs, achieves competitive results compared to existing state-of-the-art methods, and even surpasses Gemini-1.5-Pro on the VSI-Bench evaluations.
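The abstract's core architectural idea, fusing 3D geometric prior features with ordinary 2D visual tokens before they enter the MLLM, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: the module name `GeometryFusion`, all dimensions, and the concatenate-then-project fusion strategy are assumptions for the sake of the example.

```python
import torch
import torch.nn as nn


class GeometryFusion(nn.Module):
    """Hypothetical sketch of geometry-aware visual token fusion.

    Per-frame 2D visual tokens (from the MLLM's vision encoder) are fused
    with 3D geometry-prior tokens (from a video geometry encoder) and
    projected into the LLM embedding space. All names/dims are illustrative.
    """

    def __init__(self, vis_dim: int = 1024, geo_dim: int = 768, llm_dim: int = 2048):
        super().__init__()
        # Align geometry features with the visual token space.
        self.geo_proj = nn.Linear(geo_dim, vis_dim)
        # Joint projection of concatenated features into the LLM input space.
        self.fuse = nn.Linear(vis_dim * 2, llm_dim)

    def forward(self, vis_tokens: torch.Tensor, geo_tokens: torch.Tensor) -> torch.Tensor:
        # vis_tokens: (B, T, N, vis_dim) -- N visual tokens per frame, T frames
        # geo_tokens: (B, T, N, geo_dim) -- matching 3D geometry-prior tokens
        geo = self.geo_proj(geo_tokens)
        fused = torch.cat([vis_tokens, geo], dim=-1)
        return self.fuse(fused)  # (B, T, N, llm_dim), ready for the LLM


if __name__ == "__main__":
    m = GeometryFusion()
    v = torch.randn(2, 4, 16, 1024)  # batch of 2 clips, 4 frames, 16 tokens/frame
    g = torch.randn(2, 4, 16, 768)
    out = m(v, g)
    print(out.shape)  # torch.Size([2, 4, 16, 2048])
```

The key design point this sketch captures is that no explicit 3D input (point cloud or BEV map) is required: the geometry branch is just another feature stream derived from the same video frames, so the whole pipeline remains end-to-end trainable alongside the MLLM.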