🤖 AI Summary
Current vision-language models (VLMs) struggle to interpret and convey 3D spatial relations (e.g., "front-left", "diagonally behind"), which undermines their reliability in physical navigation tasks such as walking guidance for visually impaired users. To address this, the paper introduces the Space-Aware Instruction Tuning (SAIT) dataset and the Space-Aware Benchmark (SA-Bench). An automated data generation pipeline builds training samples around the virtual path to the destination in 3D space and the surrounding scene, and a dedicated evaluation protocol measures how effectively VLMs deliver walking guidance. Experiments show that the space-aware instruction-tuned model outperforms state-of-the-art methods on SA-Bench. All data, benchmarks, and code are publicly released to advance research in spatial reasoning and accessible AI.
📝 Abstract
Guide dog robots offer promising solutions for enhancing the mobility and safety of visually impaired individuals, addressing the limitations of traditional guide dogs, particularly in perceptual intelligence and communication. With the emergence of Vision-Language Models (VLMs), robots can now generate natural language descriptions of their surroundings, aiding safer decision-making. However, existing VLMs often struggle to accurately interpret and convey spatial relationships, which is crucial for navigation in complex environments such as street crossings. We introduce the Space-Aware Instruction Tuning (SAIT) dataset and the Space-Aware Benchmark (SA-Bench) to address the limitations of current VLMs in understanding physical environments. Our automated data generation pipeline focuses on the virtual path to the destination in 3D space and on the surroundings, enhancing environmental comprehension and enabling VLMs to provide more accurate guidance to visually impaired individuals. We also propose an evaluation protocol to assess how effectively VLMs deliver walking guidance. Comparative experiments demonstrate that our space-aware instruction-tuned model outperforms state-of-the-art algorithms. We have fully open-sourced the SAIT dataset and SA-Bench, along with the related code, at https://github.com/byungokhan/Space-awareVLM.
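To make the data and evaluation shape concrete, below is a minimal Python sketch of what a space-aware training sample and a benchmark scoring loop might look like. Everything here is an assumption for illustration: the field names (`virtual_path`, `surroundings`, `guidance`), the token-overlap scorer, and the `model.generate` call are hypothetical stand-ins, not the released SAIT schema or SA-Bench protocol; see the repository above for the actual formats.

```python
"""Minimal sketch of a space-aware training sample and a benchmark
evaluation loop. All field names, the token-overlap scorer, and the
`model.generate` call are illustrative assumptions, not the released
SAIT schema or the SA-Bench protocol."""

# A space-aware sample pairs an egocentric image with a virtual 3D path
# to the destination and spatial descriptions of the surroundings, so a
# fine-tuned VLM learns to ground its guidance in 3D relations.
sample = {
    "image": "frames/crossing_0042.jpg",       # egocentric street view
    "destination": "pedestrian signal pole",
    "virtual_path": [                          # 3D waypoints (x, y, z), meters
        [0.0, 0.0, 0.0],
        [1.2, 0.0, 3.5],
        [1.8, 0.0, 7.0],
    ],
    "surroundings": [
        {"object": "bicycle", "relation": "front-left", "distance_m": 2.5},
        {"object": "curb", "relation": "ahead", "distance_m": 4.0},
    ],
    "instruction": "Describe a safe walking route to the destination.",
    "guidance": (
        "Walk straight about four meters to the curb; a bicycle is "
        "parked two and a half meters to your front-left."
    ),
}


def judge_guidance(prediction: str, reference: str) -> float:
    """Placeholder scorer: token overlap with the reference guidance.

    Stands in for the paper's evaluation protocol, which assesses how
    effectively a VLM delivers walking guidance.
    """
    pred = set(prediction.lower().split())
    ref = set(reference.lower().split())
    return len(pred & ref) / max(len(ref), 1)


def evaluate(model, benchmark):
    """Average guidance score of `model` over SA-Bench-style items."""
    scores = [
        judge_guidance(model.generate(item["image"], item["instruction"]),
                       item["guidance"])
        for item in benchmark
    ]
    return sum(scores) / len(scores)
```

One plausible reading of the pipeline, consistent with the abstract, is that anchoring each sample on an explicit 3D path encourages the model to phrase guidance in terms of spatial relations rather than generic scene captions.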