🤖 AI Summary
Current live-stream shopping platforms lack accessibility features for Deaf and Hard-of-Hearing (DHH) users, resulting in information inaccessibility or cognitive overload. To address this, we propose a mobile-based assistive system for DHH users that integrates real-time speech-to-text conversion, Rapid Serial Visual Presentation (RSVP), and large language models (LLMs) to enable live-stream transcription, intelligent summarization of key information, and low-cognitive-load visual delivery. The system supports adjustable summary length and semantics-driven information prioritization. In a controlled study with 38 DHH participants, our approach significantly improved information acquisition efficiency (+42.3%), task completion rate (+37.1%), and subjective usability (p < 0.01). This work establishes a scalable technical paradigm for accessible live-stream interaction.
📝 Abstract
Livestream shopping platforms often overlook the accessibility needs of the Deaf and Hard of Hearing (DHH) community, leading to barriers such as information inaccessibility and overload. To tackle these challenges, we developed *EchoAid*, a mobile app designed to improve the livestream shopping experience for DHH users. *EchoAid* utilizes advanced speech-to-text conversion, Rapid Serial Visual Presentation (RSVP) technology, and Large Language Models (LLMs) to simplify the complex information flow in live sales environments. We conducted exploratory studies with eight DHH individuals to identify design needs and iteratively developed the *EchoAid* prototype based on feedback from three participants. We then evaluated the system's performance in a user study workshop involving 38 DHH participants. Our findings demonstrate the successful design and validation of *EchoAid*, highlighting its potential to enhance product information extraction, reduce cognitive overload, and deliver more engaging, customized shopping experiences for DHH users.