OTAS: Open-vocabulary Token Alignment for Outdoor Segmentation

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address semantic ambiguity, ill-defined category boundaries, and the failure of object-centric priors in outdoor open-vocabulary segmentation, this paper proposes a semantic modeling approach grounded in pre-trained vision model tokens. Specifically, it extracts semantic structure directly from the visual tokens of multi-view images, then constructs a geometrically consistent 3D feature field via cross-view semantic clustering and language–vision token alignment, enabling zero-shot, language-grounded segmentation without fine-tuning. By abandoning conventional object-centric assumptions, the method ensures cross-view semantic consistency. Experiments show up to a 151% improvement in 3D segmentation IoU over prior open-vocabulary mapping methods on TartanAir, a minor IoU improvement over fine-tuned and open-vocabulary 2D methods on the Off-Road Freespace Detection dataset, and successful deployment on a real-world robotic system.

📝 Abstract
Understanding open-world semantics is critical for robotic planning and control, particularly in unstructured outdoor environments. Current vision-language mapping approaches rely on object-centric segmentation priors, which often fail outdoors due to semantic ambiguities and indistinct semantic class boundaries. We propose OTAS - an Open-vocabulary Token Alignment method for Outdoor Segmentation. OTAS overcomes the limitations of open-vocabulary segmentation models by extracting semantic structure directly from the output tokens of pretrained vision models. By clustering semantically similar structures across single and multiple views and grounding them in language, OTAS reconstructs a geometrically consistent feature field that supports open-vocabulary segmentation queries. Our method operates zero-shot, without scene-specific fine-tuning, and runs at up to ~17 fps. OTAS provides a minor IoU improvement over fine-tuned and open-vocabulary 2D segmentation methods on the Off-Road Freespace Detection dataset. Our model achieves up to a 151% IoU improvement over open-vocabulary mapping methods in 3D segmentation on TartanAir. Real-world reconstructions demonstrate OTAS' applicability to robotic applications. The code and ROS node will be made publicly available upon paper acceptance.
Problem

Research questions and friction points this paper is trying to address.

Overcomes semantic ambiguities in outdoor segmentation
Enables open-vocabulary queries without scene fine-tuning
Improves 3D segmentation accuracy for robotic applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts semantic structure from pretrained vision tokens
Clusters similar structures across single and multiple views
Reconstructs geometrically consistent feature field
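
The pipeline above (visual tokens → semantic clustering → language grounding) can be sketched in miniature. This is a hedged illustration, not OTAS's actual implementation: the k-means clustering, toy random tokens, and the function names `kmeans` and `align_to_language` are all assumptions standing in for the paper's unspecified clustering and CLIP-style alignment details.

```python
import numpy as np

def kmeans(tokens, k, iters=10, seed=0):
    # Simple k-means over L2-normalized tokens; an illustrative stand-in
    # for OTAS's cross-view semantic clustering (exact method assumed).
    rng = np.random.default_rng(seed)
    centroids = tokens[rng.choice(len(tokens), k, replace=False)]
    for _ in range(iters):
        # Dot product equals cosine similarity for unit vectors.
        labels = np.argmax(tokens @ centroids.T, axis=1)
        for j in range(k):
            members = tokens[labels == j]
            if len(members):
                c = members.mean(axis=0)
                centroids[j] = c / np.linalg.norm(c)
    return centroids, labels

def align_to_language(centroids, text_embeds):
    # Ground each visual cluster in language by its most similar
    # text embedding (CLIP-style alignment, assumed).
    return np.argmax(centroids @ text_embeds.T, axis=1)

# Toy data standing in for ViT output tokens and text-query embeddings.
rng = np.random.default_rng(1)
tokens = rng.normal(size=(256, 32))
tokens /= np.linalg.norm(tokens, axis=1, keepdims=True)
text_embeds = rng.normal(size=(3, 32))   # e.g. "trail", "grass", "sky"
text_embeds /= np.linalg.norm(text_embeds, axis=1, keepdims=True)

centroids, token_labels = kmeans(tokens, k=8)
cluster_to_class = align_to_language(centroids, text_embeds)
per_token_class = cluster_to_class[token_labels]  # open-vocab label per token
```

In the full method these per-token labels would additionally be fused across views into a geometrically consistent 3D feature field; the sketch only covers the single-view clustering and grounding steps.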