🤖 AI Summary
This work proposes Z3D, a general-purpose zero-shot 3D visual grounding framework that operates solely on multi-view images, without any 3D annotations or geometric supervision. By leveraging high-quality zero-shot 3D instance segmentation to generate region proposals and fusing multi-view imagery (optionally augmented with camera poses and depth maps), Z3D integrates prompt-driven vision-language model reasoning to significantly improve semantic alignment. Notably, Z3D achieves high-precision, natural-language-guided 3D object localization without any 3D supervision, setting a new state of the art among zero-shot methods on the ScanRefer and Nr3D benchmarks.
📝 Abstract
3D visual grounding (3DVG) aims to localize objects in a 3D scene based on natural language queries. In this work, we explore zero-shot 3DVG from multi-view images alone, without requiring any geometric supervision or object priors. We introduce Z3D, a universal grounding pipeline that flexibly operates on multi-view images while optionally incorporating camera poses and depth maps. We identify key bottlenecks in prior zero-shot methods that cause significant performance degradation and address them with (i) a state-of-the-art zero-shot 3D instance segmentation method to generate high-quality 3D bounding box proposals and (ii) advanced reasoning via prompt-based segmentation, which utilizes the full capabilities of modern VLMs. Extensive experiments on the ScanRefer and Nr3D benchmarks demonstrate that our approach achieves state-of-the-art performance among zero-shot methods. Code is available at https://github.com/col14m/z3d.
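To make the two-stage structure concrete, here is a minimal sketch of the proposal-then-rank pattern the abstract describes: a segmentation stage yields 3D box proposals with open-vocabulary labels, and a reasoning stage selects the proposal best matching the query. All names (`Proposal`, `score_proposal`, `ground`) are hypothetical illustrations, and the toy word-overlap scorer merely stands in for the VLM-based prompt reasoning used in the actual Z3D pipeline.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    box: tuple   # axis-aligned 3D box (x, y, z, w, h, d), as produced by instance segmentation
    label: str   # open-vocabulary label attached to the segment

def score_proposal(query: str, proposal: Proposal) -> float:
    """Toy stand-in for VLM reasoning: fraction of label words present in the query."""
    query_words = set(query.lower().split())
    label_words = set(proposal.label.lower().split())
    return len(query_words & label_words) / max(len(label_words), 1)

def ground(query: str, proposals: list) -> Proposal:
    """Return the proposal that best matches the natural-language query."""
    return max(proposals, key=lambda p: score_proposal(query, p))

# Hypothetical proposals for a scene; in practice these come from
# zero-shot 3D instance segmentation over fused multi-view images.
proposals = [
    Proposal((0, 0, 0, 1, 1, 1), "wooden chair"),
    Proposal((2, 0, 0, 1, 2, 1), "brown sofa"),
]
best = ground("the brown sofa near the window", proposals)  # selects the sofa proposal
```

In the real system, the scoring step is replaced by prompt-based segmentation with a modern VLM, which is what lets the pipeline resolve relational queries (e.g. "near the window") that simple label matching cannot.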