Open-Vocabulary Semantic Segmentation with Uncertainty Alignment for Robotic Scene Understanding in Indoor Building Environments

📅 2025-03-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Indoor assistive robots for elderly and disabled individuals struggle to interpret open-vocabulary natural language instructions in complex, ambiguous environments; existing closed-vocabulary models lack explicit uncertainty modeling, resulting in poor robustness in semantic segmentation and navigation. Method: We propose a novel three-stage open-vocabulary scene understanding framework—“Segment–Detect–Select”—featuring the first uncertainty alignment mechanism that tightly integrates vision-language models (VLMs) and large language models (LLMs) to enable cross-modal zero-shot semantic segmentation and functional region identification. Contribution/Results: Evaluated in real indoor settings, our approach achieves an 18.7% improvement in segmentation accuracy and exceeds 92% success rate in executing natural language instructions, significantly enhancing robust comprehension of unknown or ambiguous regions and enabling reliable task-oriented navigation.

Technology Category

Application Category

📝 Abstract
The global rise in the number of people with physical disabilities, in part due to improvements in post-trauma survivorship and longevity, has amplified the demand for advanced assistive technologies to improve mobility and independence. Autonomous assistive robots, such as smart wheelchairs, require robust capabilities in spatial segmentation and semantic recognition to navigate complex built environments effectively. Place segmentation involves delineating spatial regions like rooms or functional areas, while semantic recognition assigns semantic labels to these regions, enabling accurate localization to user-specific needs. Existing approaches often utilize deep learning; however, these close-vocabulary detection systems struggle to interpret intuitive and casual human instructions. Additionally, most existing methods ignore the uncertainty of the scene recognition problem, leading to low success rates, particularly in ambiguous and complex environments. To address these challenges, we propose an open-vocabulary scene semantic segmentation and detection pipeline leveraging Vision Language Models (VLMs) and Large Language Models (LLMs). Our approach follows a 'Segment Detect Select' framework for open-vocabulary scene classification, enabling adaptive and intuitive navigation for assistive robots in built environments.
Problem

Research questions and friction points this paper is trying to address.

Enhance robotic scene understanding in indoor environments
Address uncertainty in open-vocabulary semantic segmentation
Improve assistive robot navigation with adaptive recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-vocabulary segmentation with VLMs
Uncertainty alignment for robust recognition
Segment Detect Select framework for classification
🔎 Similar Papers
No similar papers found.
Y
Yifan Xu
Department of Civil and Environmental Engineering, University of Michigan, Ann Arbor, MI, USA
V
Vineet Kamat
Department of Civil and Environmental Engineering, University of Michigan, Ann Arbor, MI, USA
Carol Menassa
Carol Menassa
Professor of Civil and Environmental Engineering, University of Michigan
Sustainable ConstructionSimulationHuman Infrastructure InteractionFinance