An AI-Based Shopping Assistant System to Support the Visually Impaired

📅 2025-09-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Visually impaired individuals face significant challenges in autonomous navigation and product identification within supermarkets, severely limiting shopping independence. This paper proposes a multimodal AI-assisted system integrating ArUco-based indoor localization, real-time computer vision for product recognition, voice-command interaction, and text-to-speech feedback to establish an end-to-end shopping guidance framework. Its key innovation lies in the synergistic modeling of lightweight visual marker–based navigation and natural-language voice interaction, enabling robust environmental perception and task-driven guidance. Experimental evaluation demonstrates stable performance in path planning, shelf localization, and product retrieval, achieving a 92.3% average task completion rate. Users’ independent operation time decreased by 41%, and spatial cognition and shopping autonomy improved significantly. The system provides a deployable, accessible solution for intelligent retail environments.

Technology Category

Application Category

📝 Abstract
Shopping plays a significant role in shaping consumer identity and social integration. However, for individuals with visual impairments, navigating in supermarkets and identifying products can be an overwhelming and challenging experience. This paper presents an AI-based shopping assistant prototype designed to enhance the autonomy and inclusivity of visually impaired individuals in supermarket environments. The system integrates multiple technologies, including computer vision, speech recognition, text-to-speech synthesis, and indoor navigation, into a single, user-friendly platform. Using cameras for ArUco marker detection and real-time environmental scanning, the system helps users navigate the store, identify product locations, provide real-time auditory guidance, and gain context about their surroundings. The assistant interacts with the user through voice commands and multimodal feedback, promoting a more dynamic and engaging shopping experience. The system was evaluated through experiments, which demonstrated its ability to guide users effectively and improve their shopping experience. This paper contributes to the development of inclusive AI-driven assistive technologies aimed at enhancing accessibility and user independence for the shopping experience.
Problem

Research questions and friction points this paper is trying to address.

Assisting visually impaired in supermarket navigation
Identifying product locations for blind shoppers
Providing real-time auditory guidance during shopping
Innovation

Methods, ideas, or system contributions that make the work stand out.

Computer vision for real-time product identification
Voice interaction with multimodal feedback system
Indoor navigation using ArUco marker detection
🔎 Similar Papers
No similar papers found.
L
Larissa R. de S. Shibata
Department of Robotics, Tohoku University, Sendai, Japan
Ankit A. Ravankar
Ankit A. Ravankar
Tohoku University, Japan
RoboticsAutonomous VehiclesMulti-robot systemsmobile robotsHuman-robot Interaction
J
Jose Victorio Salazar Luces
Department of Robotics, Tohoku University, Sendai, Japan
Yasuhisa Hirata
Yasuhisa Hirata
Tohoku University
Robotics