🤖 AI Summary
Existing 3D visual grounding methods focus predominantly on object-level localization, neglecting higher-order spatial semantics such as activity areas, free space, and object parts. Method: The paper introduces Anywhere3D-Bench, the first four-tier 3D visual grounding benchmark, which pairs 2,632 referring expressions with 3D bounding boxes across human-activity areas, unoccupied space beyond objects, objects, and fine-grained object parts, and uses it to evaluate state-of-the-art 3D visual grounding methods, large language models (LLMs), and multimodal LLMs (MLLMs) under a common protocol. Results: Even the best-performing model, OpenAI o4-mini, achieves only 23.57% accuracy on space-level grounding and 33.94% on part-level grounding, substantially below its area-level and object-level performance, revealing fundamental limitations in spatial relational reasoning and fine-grained structural perception. Anywhere3D-Bench thus serves as a diagnostic benchmark for advancing 3D scene understanding toward broader spatial scopes and finer granularity.
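To make the four-tier structure concrete, here is a minimal sketch of how a single benchmark entry might be represented. The `GroundingLevel` enum, the field names, and the center-size box encoding are illustrative assumptions for this summary, not the benchmark's released schema.

```python
from dataclasses import dataclass
from enum import Enum

class GroundingLevel(Enum):
    """The four grounding tiers evaluated in Anywhere3D-Bench."""
    AREA = "area"      # human-activity areas
    SPACE = "space"    # unoccupied space beyond objects
    OBJECT = "object"  # objects in the scene
    PART = "part"      # fine-grained object parts

@dataclass
class GroundingSample:
    """One referring expression paired with a target 3D box.

    Hypothetical schema for illustration; the released benchmark
    may use different field names and box conventions.
    """
    scene_id: str
    level: GroundingLevel
    expression: str                       # e.g. "the empty space left of the sofa"
    bbox: tuple[float, float, float,      # center (x, y, z)
                float, float, float]      # size   (dx, dy, dz)
```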
📝 Abstract
3D visual grounding has made notable progress in localizing objects within complex 3D scenes. However, grounding referring expressions beyond objects in 3D scenes remains unexplored. In this paper, we introduce Anywhere3D-Bench, a holistic 3D visual grounding benchmark consisting of 2,632 pairs of referring expressions and 3D bounding boxes spanning four grounding levels: human-activity areas, unoccupied space beyond objects, objects in the scene, and fine-grained object parts. We assess a range of state-of-the-art 3D visual grounding methods alongside large language models (LLMs) and multimodal LLMs (MLLMs) on Anywhere3D-Bench. Experimental results reveal that space-level and part-level visual grounding pose the greatest challenges: space-level tasks require more comprehensive spatial reasoning, for example modeling distances and spatial relations in 3D space, while part-level tasks demand fine-grained perception of object composition. Even the best-performing model, OpenAI o4-mini, achieves only 23.57% accuracy on space-level tasks and 33.94% on part-level tasks, significantly lower than its performance on area-level and object-level tasks. These findings underscore a critical gap in current models' capacity to understand and reason about 3D scenes beyond object-level semantics.
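The abstract reports accuracy against ground-truth 3D boxes but does not state the matching criterion. As a hedged illustration, the sketch below scores axis-aligned boxes by intersection-over-union; the 0.25 threshold and the (center, size) box encoding are common conventions in 3D grounding and are assumptions here, not the paper's documented protocol.

```python
import numpy as np

def box_iou_3d(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two axis-aligned 3D boxes given as (cx, cy, cz, dx, dy, dz)."""
    a_min, a_max = a[:3] - a[3:] / 2, a[:3] + a[3:] / 2
    b_min, b_max = b[:3] - b[3:] / 2, b[:3] + b[3:] / 2
    # Overlap extent along each axis, clipped at zero for disjoint boxes.
    inter = np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0, None)
    inter_vol = inter.prod()
    union = a[3:].prod() + b[3:].prod() - inter_vol
    return float(inter_vol / union) if union > 0 else 0.0

def accuracy_at_iou(preds, gts, thresh: float = 0.25) -> float:
    """Fraction of predicted boxes whose IoU with the ground truth meets thresh."""
    hits = [box_iou_3d(p, g) >= thresh for p, g in zip(preds, gts)]
    return sum(hits) / len(hits)
```

Per-level results such as the space-level and part-level accuracies quoted above would then correspond to applying a metric like `accuracy_at_iou` separately to the subset of samples at each grounding level.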