AI Summary
Existing open-vocabulary segmentation (OVS) methods rely on manually crafted text prompts or predefined category sets, limiting scalability. This paper proposes 3D-AVS, the first fully automatic OVS framework for 3D point clouds: it requires no human intervention, dynamically generates semantically coherent and lexically rich vocabularies at inference time, and produces accurate per-point segmentations. Key contributions include: (1) the first end-to-end automatic vocabulary generation mechanism; (2) a Sparse Masked Attention Pooling (SMAP) module that enriches the diversity of recognized objects; and (3) Text-Point Semantic Similarity (TPSS), a label-free metric for evaluating generated vocabularies. By fusing multimodal features (RGB images and LiDAR point clouds), 3D-AVS models cross-modal semantic similarity between text and points. Experiments on nuScenes and ScanNet200 show substantial improvements over prompt-dependent OVS baselines, validating both the quality of the automatically generated vocabularies and the segmentation accuracy.
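The per-point assignment step described above can be illustrated with a minimal sketch: embed the auto-generated vocabulary and the points in a shared space, then label each point with its most similar vocabulary entry. All names, shapes, and the toy features below are hypothetical; the actual 3D-AVS pipeline uses learned image/LiDAR encoders and its own matching procedure.

```python
import numpy as np

def segment_points(point_feats: np.ndarray, text_feats: np.ndarray) -> np.ndarray:
    """Assign each point the vocabulary entry with the highest cosine similarity.

    point_feats: (N, D) per-point embeddings (e.g. from an image/LiDAR encoder).
    text_feats:  (V, D) embeddings of the auto-generated vocabulary.
    Returns an (N,) array of vocabulary indices.
    """
    # L2-normalize so dot products equal cosine similarities
    p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sim = p @ t.T  # (N, V) text-point similarity matrix
    return sim.argmax(axis=1)

# Toy example: 3 points and a 2-word vocabulary in a 4-D embedding space
points = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.9, 0.1, 0.0, 0.0]])
vocab = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
print(segment_points(points, vocab))  # → [0 1 0]
```

The normalize-then-dot-product pattern is the standard way cosine similarity is computed for CLIP-style text/feature matching; the vocabulary here would come from the automatic generation stage rather than a human prompt.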
Abstract
Open-Vocabulary Segmentation (OVS) methods offer promising capabilities in detecting unseen object categories, but the categories must be known in advance and provided by a human, either via a text prompt or pre-labeled datasets, limiting their scalability. We propose 3D-AVS, a method for Auto-Vocabulary Segmentation of 3D point clouds in which the vocabulary is unknown and auto-generated for each input at runtime, eliminating the human in the loop and typically yielding a substantially larger vocabulary for richer annotations. 3D-AVS first recognizes semantic entities from image or point cloud data and then segments all points with the automatically generated vocabulary. Our method incorporates both image-based and point-based recognition, enhancing robustness under challenging lighting conditions where geometric information from LiDAR is especially valuable. Our point-based recognition features a Sparse Masked Attention Pooling (SMAP) module to enrich the diversity of recognized objects. To address the challenges of evaluating unknown vocabularies and to avoid annotation biases from label synonyms, hierarchies, or semantic overlaps, we introduce the annotation-free Text-Point Semantic Similarity (TPSS) metric for assessing generated vocabulary quality. Our evaluations on nuScenes and ScanNet200 demonstrate 3D-AVS's ability to generate semantic classes with accurate point-wise segmentations. Code will be released at https://github.com/ozzyou/3D-AVS.
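The idea behind an annotation-free vocabulary quality signal can be sketched as a mean best-match cosine similarity between point embeddings and vocabulary embeddings. Note this is only an illustration of the concept: the paper defines TPSS precisely, and the function name and formulation below are assumptions, not the paper's definition.

```python
import numpy as np

def text_point_similarity(point_feats: np.ndarray, text_feats: np.ndarray) -> float:
    """Mean best-match cosine similarity between points and vocabulary entries.

    point_feats: (N, D) per-point embeddings.
    text_feats:  (V, D) embeddings of the auto-generated vocabulary.
    Higher values suggest the vocabulary covers the scene semantics well,
    without requiring any ground-truth labels.
    """
    p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    # For each point, keep its best similarity to any vocabulary word, then average
    return float((p @ t.T).max(axis=1).mean())

# A vocabulary aligned with the point features scores higher than a mismatched one
points = np.array([[1.0, 0.0], [0.0, 1.0]])
good_vocab = np.array([[1.0, 0.0], [0.0, 1.0]])
bad_vocab = np.array([[-1.0, 0.0]])
print(text_point_similarity(points, good_vocab))  # → 1.0
print(text_point_similarity(points, bad_vocab))   # → -0.5
```

Because the score needs no annotations, a proxy like this sidesteps the synonym/hierarchy biases that make label-based evaluation of auto-generated vocabularies unreliable.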