🤖 AI Summary
Indoor assistive robots for elderly and disabled individuals struggle to interpret open-vocabulary natural language instructions in complex, ambiguous environments; existing closed-vocabulary models lack explicit uncertainty modeling, resulting in poor robustness in semantic segmentation and navigation. Method: We propose a novel three-stage open-vocabulary scene understanding framework, "Segment–Detect–Select", featuring the first uncertainty alignment mechanism that tightly integrates vision-language models (VLMs) and large language models (LLMs) to enable cross-modal zero-shot semantic segmentation and functional region identification. Contribution/Results: Evaluated in real indoor settings, our approach achieves an 18.7% improvement in segmentation accuracy and exceeds a 92% success rate in executing natural language instructions, significantly enhancing robust comprehension of unknown or ambiguous regions and enabling reliable task-oriented navigation.
📝 Abstract
The global rise in the number of people with physical disabilities, driven in part by improvements in post-trauma survivorship and longevity, has amplified the demand for advanced assistive technologies that improve mobility and independence. Autonomous assistive robots, such as smart wheelchairs, require robust capabilities in spatial segmentation and semantic recognition to navigate complex built environments effectively. Place segmentation involves delineating spatial regions such as rooms or functional areas, while semantic recognition assigns semantic labels to these regions, enabling localization tailored to user-specific needs. Existing approaches often rely on deep learning; however, these closed-vocabulary detection systems struggle to interpret intuitive and casual human instructions. Additionally, most existing methods ignore the uncertainty inherent in scene recognition, leading to low success rates, particularly in ambiguous and complex environments. To address these challenges, we propose an open-vocabulary scene semantic segmentation and detection pipeline leveraging Vision-Language Models (VLMs) and Large Language Models (LLMs). Our approach follows a "Segment–Detect–Select" framework for open-vocabulary scene classification, enabling adaptive and intuitive navigation for assistive robots in built environments.
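The three-stage flow described above can be sketched in code. This is a minimal, hypothetical illustration only: the function names, the placeholder similarity scores, and the fixed confidence threshold are all assumptions standing in for the paper's actual VLM-based segmentation/detection and LLM-based selection; the key idea shown is that the Select stage flags low-confidence regions as ambiguous rather than forcing a label.

```python
# Hedged sketch of a "Segment–Detect–Select" pipeline. All names and
# scores are hypothetical stand-ins: in the actual system, Segment and
# Detect would be backed by a VLM scoring regions against open-vocabulary
# labels, and Select would use an LLM plus uncertainty modeling.
from dataclasses import dataclass

@dataclass
class Region:
    region_id: int
    # Candidate open-vocabulary labels with similarity scores
    # (stand-in for VLM image-text matching scores).
    label_scores: dict

def segment(scene):
    """Stage 1: partition the scene into candidate regions.
    Placeholder: the toy scene is already region-partitioned."""
    return scene

def detect(regions, vocabulary):
    """Stage 2: score each region against the queried open vocabulary.
    Placeholder: keep only scores for labels in the vocabulary."""
    return [
        Region(r.region_id,
               {l: s for l, s in r.label_scores.items() if l in vocabulary})
        for r in regions
    ]

def select(regions, threshold=0.5):
    """Stage 3: pick the best label per region, flagging
    low-confidence regions as ambiguous instead of mislabeling them."""
    results = {}
    for r in regions:
        label, score = max(r.label_scores.items(), key=lambda kv: kv[1])
        results[r.region_id] = label if score >= threshold else "ambiguous"
    return results

# Toy scene: two regions with hypothetical VLM similarity scores.
scene = [
    Region(0, {"kitchen": 0.82, "dining room": 0.40}),
    Region(1, {"kitchen": 0.48, "dining room": 0.47}),
]
labels = select(detect(segment(scene), {"kitchen", "dining room"}))
print(labels)  # region 1 is flagged as ambiguous
```

Thresholding the top score is only one simple way to express uncertainty; the paper's uncertainty alignment mechanism is more elaborate, but the effect at this level is the same: ambiguous regions are handled explicitly rather than silently misclassified.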