🤖 AI Summary
To address the challenge of independent mobility for visually impaired individuals, this paper proposes and implements BUDD-e, a novel guide robot prototype. The system integrates multimodal sensing, simultaneous localization and mapping (SLAM), dynamic environment perception, and natural human–robot interaction, achieving, for the first time, end-to-end fully autonomous navigation in a real-world clinical setting. User evaluations conducted at Niguarda Hospital in Milan demonstrate robust obstacle avoidance, precise path planning in complex indoor environments, and seamless multimodal interaction via speech and haptics. In trials with 12 visually impaired participants, task completion reached 96.3%, with an average subjective satisfaction score of 4.7/5.0. The core contribution lies in the tight coupling of high-robustness autonomous navigation with human-centered interaction design—significantly enhancing user independence and safety. This work validates the technical feasibility and practical applicability of such systems in real-world healthcare environments.
📝 Abstract
This paper describes the design and realization of a prototype of BUDD-e, a novel guide robot for visually impaired users. The robot was tested in a real-world scenario with visually impaired volunteers at ASST Grande Ospedale Metropolitano Niguarda in Milan. The results of the experimental campaign are thoroughly described in the paper, demonstrating the system's strong performance and user acceptance.