From Objects to Anywhere: A Holistic Benchmark for Multi-level Visual Grounding in 3D Scenes

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing 3D visual grounding methods focus predominantly on object-level localization, neglecting higher-order spatial semantics such as activity areas, free space, and object parts. Method: We introduce Anywhere3D-Bench, the first four-tier 3D visual grounding benchmark covering activity areas, free space, objects, and object parts, and systematically define and evaluate space-level and part-level grounding tasks. Our approach integrates 3D point cloud/voxel representations, multimodal large language models (MLLMs), explicit spatial relation encoding, and a cross-tier evaluation protocol. Results: State-of-the-art models (e.g., o4-mini) achieve only 23.57% and 33.94% accuracy on space-level and part-level grounding, respectively, substantially below their area-level and object-level performance, revealing fundamental limitations in spatial relational reasoning and fine-grained structural perception. Anywhere3D-Bench establishes a new diagnostic benchmark to advance 3D scene understanding toward broader spatial scopes and finer granularity.

📝 Abstract
3D visual grounding has made notable progress in localizing objects within complex 3D scenes. However, grounding referring expressions beyond objects in 3D scenes remains unexplored. In this paper, we introduce Anywhere3D-Bench, a holistic 3D visual grounding benchmark consisting of 2,632 referring expression-3D bounding box pairs spanning four different grounding levels: human-activity areas, unoccupied space beyond objects, objects in the scene, and fine-grained object parts. We assess a range of state-of-the-art 3D visual grounding methods alongside large language models (LLMs) and multimodal LLMs (MLLMs) on Anywhere3D-Bench. Experimental results reveal that space-level and part-level visual grounding pose the greatest challenges: space-level tasks require more comprehensive spatial reasoning, for example, modeling distances and spatial relations within 3D space, while part-level tasks demand fine-grained perception of object composition. Even the best-performing model, OpenAI o4-mini, achieves only 23.57% accuracy on space-level tasks and 33.94% on part-level tasks, significantly lower than its performance on area-level and object-level tasks. These findings underscore a critical gap in current models' capacity to understand and reason about 3D scenes beyond object-level semantics.
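Since the benchmark pairs each referring expression with a ground-truth 3D bounding box and reports per-level accuracy, a natural scoring scheme is volumetric IoU between the predicted and ground-truth boxes. The sketch below illustrates this for axis-aligned boxes; the IoU threshold of 0.25 and the `aabb_iou_3d` / `accuracy_at_iou` names are assumptions for illustration, not details confirmed by the paper.

```python
def aabb_iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes, each (xmin, ymin, zmin, xmax, ymax, zmax)."""
    # Overlap extent along each of the three axes, clamped at zero.
    inter_dims = [
        max(0.0, min(box_a[i + 3], box_b[i + 3]) - max(box_a[i], box_b[i]))
        for i in range(3)
    ]
    inter = inter_dims[0] * inter_dims[1] * inter_dims[2]

    def volume(b):
        return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])

    union = volume(box_a) + volume(box_b) - inter
    return inter / union if union > 0 else 0.0


def accuracy_at_iou(preds, gts, thresh=0.25):
    """Fraction of predicted boxes whose IoU with ground truth meets the threshold."""
    hits = sum(aabb_iou_3d(p, g) >= thresh for p, g in zip(preds, gts))
    return hits / len(gts)
```

Under this scheme, a reported accuracy like 23.57% on space-level tasks would mean that fraction of predicted boxes overlapped their ground-truth box above the chosen threshold.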
Problem

Research questions and friction points this paper is trying to address.

Extending 3D visual grounding beyond object localization
Benchmarking multi-level 3D spatial reasoning capabilities
Addressing challenges in space-level and part-level visual grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Holistic 3D visual grounding benchmark Anywhere3D-Bench
Evaluates multi-level grounding including space and parts
Reveals gaps in spatial and fine-grained reasoning
👥 Authors
Tianxu Wang (BIGAI)
Zhuofan Zhang (BIGAI, Tsinghua University)
Ziyu Zhu (BIGAI, Tsinghua University)
Yue Fan (BIGAI)
Jing Xiong (BIGAI, Peking University)
Pengxiang Li (Beijing Institute of Technology)
Xiaojian Ma (University of California, Los Angeles)
Qing Li (BIGAI)