Using the Pepper Robot to Support Sign Language Communication

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited accessibility of human-robot interaction (HRI) for Deaf users in public and assistive settings by investigating the comprehensibility of Italian Sign Language (LIS) expressions generated by the commercial social robot Pepper. Method: Employing a Deaf-led participatory design process involving Deaf students and LIS experts, we developed an animation library comprising 52 LIS signs and short sentences. Hand motion synthesis was achieved via MATLAB-based inverse kinematics computation augmented with manual fine-tuning of multi-joint articulation. A video-based user study assessed sign and sentence recognition performance. Contribution/Results: Results show high isolated sign recognition (>80%), though sentence comprehension is constrained by robotic actuation precision and temporal control. The work demonstrates the feasibility of deploying off-the-shelf social robots for foundational sign language expression, establishes the first Deaf-centered participatory framework for sign language robot design, and provides empirical evidence and methodological insights for multimodal augmentation and inclusive HRI.

📝 Abstract
Social robots are increasingly being trialed in public and assistive settings, but their accessibility for Deaf users remains underexplored. Italian Sign Language (LIS) is a fully-fledged natural language that relies on complex manual and non-manual components. Enabling robots to communicate using LIS could foster more inclusive human-robot interaction, especially in social environments such as hospitals, airports, or educational settings. This study investigates whether a commercial social robot, Pepper, can produce intelligible LIS signs and short signed LIS sentences. With the help of a Deaf student and his interpreter, an expert in LIS, we co-designed and implemented 52 LIS signs on Pepper using either manual animation techniques or a MATLAB-based inverse kinematics solver. We conducted an exploratory user study involving 12 participants proficient in LIS, both Deaf and hearing. Participants completed a questionnaire featuring 15 single-choice video-based sign recognition tasks and 2 open-ended questions on short signed sentences. Results show that the majority of isolated signs were recognized correctly, although full-sentence recognition was significantly lower due to Pepper's limited articulation and temporal constraints. Our findings demonstrate that even commercially available social robots like Pepper can perform a subset of LIS signs intelligibly, offering some opportunities for a more inclusive interaction design. Future developments should address multi-modal enhancements (e.g., screen-based support or expressive avatars) and involve Deaf users in participatory design to refine robot expressivity and usability.
Problem

Research questions and friction points this paper is trying to address.

Investigating Pepper robot's ability to produce intelligible Italian Sign Language
Addressing accessibility gap for Deaf users in human-robot interaction
Evaluating commercial robot's capacity for sign language communication
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used Pepper robot with manual animation techniques
Implemented MATLAB inverse kinematics solver
Co-designed 52 LIS signs with Deaf participants
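The paper drives part of its sign animations with a MATLAB-based inverse kinematics solver that maps target hand positions to joint angles. As an illustrative sketch only (not the authors' code, and Python rather than MATLAB), a minimal damped-least-squares IK loop for a planar two-link arm looks like:

```python
import numpy as np

def fk(thetas, lengths):
    """Forward kinematics of a planar arm: joint angles -> end-effector (x, y)."""
    angles = np.cumsum(thetas)  # absolute angle of each link
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def ik_dls(target, thetas, lengths, damping=0.1, iters=200):
    """Damped-least-squares IK: iteratively adjust joints so fk(thetas) -> target."""
    thetas = np.asarray(thetas, dtype=float)
    for _ in range(iters):
        err = target - fk(thetas, lengths)
        if np.linalg.norm(err) < 1e-6:
            break
        # Numerical Jacobian of end-effector position w.r.t. joint angles
        eps = 1e-6
        J = np.zeros((2, len(thetas)))
        for i in range(len(thetas)):
            d = np.zeros_like(thetas)
            d[i] = eps
            J[:, i] = (fk(thetas + d, lengths) - fk(thetas, lengths)) / eps
        # Damped pseudo-inverse step keeps the update stable near singularities
        step = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), err)
        thetas = thetas + step
    return thetas

# Example: two unit-length links reaching a point inside the workspace
lengths = np.array([1.0, 1.0])
thetas = ik_dls(np.array([1.0, 1.0]), [0.3, 0.3], lengths)
```

A real deployment on Pepper would add the robot's joint limits and use its actual arm kinematics (e.g., via the NAOqi motion API), which this sketch omits for brevity.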
Giulia Botta
Politecnico di Torino, Torino, Italy
Marco Botta
Dipartimento di Informatica, Università di Torino, Italy
Cristina Gena
Associate professor of Computer Science, Università di Torino
HCI, Human Robot Interaction, User modeling, Human-centered AI
Alessandro Mazzei
Dipartimento di Informatica, Università di Torino, Italy
Massimo Donini
Dipartimento di Informatica, Università di Torino, Italy
Alberto Lillo
Politecnico di Torino, Torino, Italy