🤖 AI Summary
Existing 3D object detection methods operate under a closed-set assumption, rendering them incapable of recognizing unseen categories or their fine-grained attributes. This work proposes OVODA, the first framework for joint open-vocabulary 3D detection of objects and their attributes, including spatial relations and motion states. Methodologically, OVODA abandons anchor-based designs that rely on predefined box dimensions, instead integrating semantic bridging via foundation models, multimodal feature alignment, view-aware prompt tuning, horizontal flip augmentation, and attribute-decoupled detection. Crucially, it introduces a prompt concatenation mechanism that requires no prior knowledge of novel categories. Evaluated on nuScenes and Argoverse 2, OVODA significantly outperforms state-of-the-art methods. Furthermore, we release OVAD, the first open-vocabulary 3D benchmark with fine-grained attribute annotations, enabling systematic evaluation of open-world 3D perception.
📝 Abstract
3D object detection plays a crucial role in autonomous systems, yet existing methods are limited by closed-set assumptions and struggle to recognize novel objects and their attributes in real-world scenarios. We propose OVODA, a novel framework enabling open-vocabulary detection of both 3D objects and their attributes without requiring anchor sizes for novel classes. OVODA uses foundation models to bridge the semantic gap between 3D features and text while jointly detecting attributes such as spatial relationships and motion states. To facilitate this research direction, we propose OVAD, a new dataset that supplements existing 3D object detection benchmarks with comprehensive attribute annotations. OVODA incorporates several key innovations: foundation-model feature concatenation, prompt tuning strategies, and specialized techniques for attribute detection, namely perspective-specified prompts and horizontal flip augmentation. Our results on the nuScenes and Argoverse 2 datasets show that, when anchor sizes of novel classes are not given, OVODA outperforms state-of-the-art methods in open-vocabulary 3D object detection while successfully recognizing object attributes. Our OVAD dataset is released at https://doi.org/10.5281/zenodo.16904069 .