A Neural Representation Framework with LLM-Driven Spatial Reasoning for Open-Vocabulary 3D Visual Grounding

📅 2025-07-09
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing methods struggle to accurately interpret spatial relations in language queries (e.g., “book on the chair”) for 3D instance localization, primarily due to insufficient spatial reasoning bridging natural language and 3D scenes. To address this, we propose SpatialReasoner, a novel framework that, for the first time, explicitly integrates large language models (LLMs) into open-vocabulary 3D visual grounding. SpatialReasoner leverages LLMs to parse spatial semantics and generate structured spatial constraints, which are fused with CLIP-based feature distillation and SAM-guided neural radiance field (NeRF) modeling to construct a hierarchical feature field that jointly encodes visual attributes and spatial logic. This enables hierarchical alignment between linguistic instructions and 3D geometry. Experiments show that SpatialReasoner significantly outperforms state-of-the-art methods across multiple benchmarks and serves as a plug-and-play module that consistently enhances the spatial reasoning and localization accuracy of diverse neural scene representations.
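To make the decomposition step concrete, here is a minimal sketch (not the authors' code) of how a query such as “book on the chair” could be parsed by a fine-tuned LLM into the target, anchor, and spatial-relation instructions the summary describes. The prompt wording, JSON schema, and `SpatialConstraint` type are illustrative assumptions.

```python
# Hypothetical sketch of LLM-driven query decomposition (not the paper's code).
import json
from dataclasses import dataclass

@dataclass
class SpatialConstraint:
    target: str    # object to localize, e.g. "book"
    anchor: str    # reference object, e.g. "chair"
    relation: str  # spatial relation, e.g. "on"

# Illustrative prompt; the paper fine-tunes an LLM rather than prompting one.
PROMPT = (
    "Decompose the 3D grounding query into JSON with keys "
    '"target", "anchor", and "relation".\n'
    "Query: {query}\nJSON:"
)

def parse_query(query: str, llm) -> SpatialConstraint:
    """Ask an LLM (any callable str -> str) for a structured decomposition."""
    raw = llm(PROMPT.format(query=query))
    fields = json.loads(raw)
    return SpatialConstraint(fields["target"], fields["anchor"], fields["relation"])

# Stubbed LLM standing in for the fine-tuned model:
fake_llm = lambda _: '{"target": "book", "anchor": "chair", "relation": "on"}'
print(parse_query("the book on the chair", fake_llm))
# SpatialConstraint(target='book', anchor='chair', relation='on')
```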

📝 Abstract
Open-vocabulary 3D visual grounding aims to localize target objects based on free-form language queries, which is crucial for embodied AI applications such as autonomous navigation, robotics, and augmented reality. Learning 3D language fields through neural representations enables accurate understanding of 3D scenes from limited viewpoints and facilitates the localization of target objects in complex environments. However, existing language field methods struggle to accurately localize instances using spatial relations in language queries, such as “the book on the chair.” This limitation mainly arises from inadequate reasoning about spatial relations in both language queries and 3D scenes. In this work, we propose SpatialReasoner, a novel neural representation-based framework with large language model (LLM)-driven spatial reasoning that constructs a visual properties-enhanced hierarchical feature field for open-vocabulary 3D visual grounding. To enable spatial reasoning in language queries, SpatialReasoner fine-tunes an LLM to capture spatial relations and explicitly infer instructions for the target, anchor, and spatial relation. To enable spatial reasoning in 3D scenes, SpatialReasoner incorporates visual properties (opacity and color) to construct a hierarchical feature field. This field represents language and instance features using distilled CLIP features and masks extracted via the Segment Anything Model (SAM). The field is then queried using the inferred instructions in a hierarchical manner to localize the target 3D instance based on the spatial relation in the language query. Extensive experiments show that our framework can be seamlessly integrated into different neural representations, outperforming baseline models in 3D visual grounding while empowering their spatial reasoning capability.
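As a hedged illustration of the hierarchical querying the abstract outlines, the sketch below first grounds the anchor by CLIP text-feature similarity and then re-ranks target candidates by the spatial relation. The per-instance representation (a CLIP feature plus a 3D centroid derived from SAM masks) and the toy geometry for “on”/“above”/“below” are assumptions, not the paper's exact scoring.

```python
# Toy hierarchical lookup over per-instance features; illustrative only.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def ground(instances, text_feat):
    """Pick the instance (clip_feat, centroid) whose feature best matches the text."""
    return int(np.argmax([cosine(f, text_feat) for f, _ in instances]))

def relation_score(target_c, anchor_c, relation):
    """Toy geometric check: e.g. 'on' prefers targets just above the anchor."""
    dz = target_c[2] - anchor_c[2]
    dxy = np.linalg.norm(target_c[:2] - anchor_c[:2])
    if relation in ("on", "above"):
        return float(dz > 0) / (1.0 + dxy + abs(dz))
    if relation in ("below", "under"):
        return float(dz < 0) / (1.0 + dxy + abs(dz))
    return 1.0 / (1.0 + np.linalg.norm(target_c - anchor_c))  # generic proximity

def localize(instances, target_feat, anchor_feat, relation):
    """Hierarchical query: ground the anchor first, then the relation-aware target."""
    anchor_idx = ground(instances, anchor_feat)
    anchor_c = instances[anchor_idx][1]
    scores = [
        cosine(f, target_feat) * relation_score(c, anchor_c, relation)
        for f, c in instances
    ]
    scores[anchor_idx] = -np.inf  # the anchor itself cannot be the target
    return int(np.argmax(scores))
```

The anchor-first ordering mirrors the hierarchical structure of the field: the anchor is resolved purely from language features, and only then does geometry enter to disambiguate among candidate targets.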
Problem

Research questions and friction points this paper is trying to address.

Localizing objects in 3D scenes using free-form language queries
Improving spatial reasoning in language queries and 3D scenes
Enhancing open-vocabulary 3D visual grounding with neural representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven spatial reasoning for 3D queries
Hierarchical feature field with visual properties
CLIP and SAM for language-instance feature fusion (see the distillation sketch below)
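The fusion bullet above can be pictured as a distillation objective: a feature head of the neural field is supervised with CLIP embeddings pooled per SAM mask, so every ray hitting one instance agrees on a feature. The sketch below is a minimal stand-in under that assumption; the tensor shapes and the cosine loss are illustrative, not the paper's exact formulation.

```python
# Hypothetical CLIP-feature distillation loss, pooled per SAM mask.
import torch
import torch.nn.functional as F

def distillation_loss(rendered_feats, clip_feats, sam_masks):
    """
    rendered_feats: (H, W, D) features rendered from the neural field
    clip_feats:     (H, W, D) per-pixel CLIP features for the same view
    sam_masks:      (K, H, W) boolean instance masks from SAM
    """
    loss = 0.0
    for mask in sam_masks:
        target = clip_feats[mask].mean(dim=0)   # one pooled feature per instance
        pred = rendered_feats[mask]             # (N, D) field features in the mask
        loss = loss + (1 - F.cosine_similarity(pred, target.expand_as(pred), dim=-1)).mean()
    return loss / max(len(sam_masks), 1)
```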