Adapting Robot's Explanation for Failures Based on Observed Human Behavior in Human-Robot Collaboration

2025-04-13
Citations: 0
Influential: 0
PDF
AI Summary
To address user confusion arising from mismatches between robotic failure explanations and human cognitive models in human-robot collaboration, this paper proposes a multimodal behavior-driven adaptive explanation framework. It introduces the first data-driven confusion prediction model, trained on real-time facial expressions, eye movements, and hand gestures, and integrates a closed-loop decision mechanism to dynamically modulate explanation granularity. This shifts explanation generation from static, predefined strategies to behavior-adaptive ones. In a user study with 55 participants, the system significantly reduced confusion rates while maintaining comprehension accuracy and, on average, shortened explanations by 32.7%. Key contributions include: (1) the first multimodal confusion prediction model; (2) a deployable, real-time adaptive explanation generation mechanism; and (3) empirical validation that behavioral feedback enhances both the naturalness and efficiency of human-robot collaboration.
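The confusion predictor described above can be sketched as a simple logistic scorer over multimodal behavioral cues. The feature names, weights, and bias below are illustrative assumptions for exposition only; the paper's actual model is trained on data from the 55-participant study.

```python
import math

# Hypothetical multimodal features, each normalized to [0, 1].
# These names and hand-set weights are illustrative assumptions,
# not the paper's learned parameters.
WEIGHTS = {
    "confusion_expression": 2.0,   # facial-emotion cue
    "gaze_aversion": 1.5,          # eye-movement cue
    "hesitation_gesture": 1.0,     # hand-gesture cue
}
BIAS = -2.0  # assumed prior: users are usually not confused

def predict_confusion(features: dict) -> float:
    """Return a logistic score in (0, 1) interpreted as the
    probability that the observed user is confused."""
    z = BIAS + sum(w * features.get(name, 0.0)
                   for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))
```

With no confusion cues present the score stays low (sigmoid of the negative bias), while strong cues on all three channels push it well above 0.5, which the closed-loop mechanism can then act on.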

๐Ÿ“ Abstract
This work aims to interpret human behavior to anticipate potential user confusion when a robot provides explanations for failure, allowing the robot to adapt its explanations for more natural and efficient collaboration. Using a dataset that included facial emotion detection, eye gaze estimation, and gestures from 55 participants in a user study, we analyzed how human behavior changed in response to different types of failures and varying explanation levels. Our goal is to assess whether human collaborators are ready to accept less detailed explanations without inducing confusion. We formulate a data-driven predictor to predict human confusion during robot failure explanations. We also propose and evaluate a mechanism, based on the predictor, to adapt the explanation level according to observed human behavior. The promising results from this evaluation indicate the potential of this research in adapting a robot's explanations for failures to enhance the collaborative experience.
Problem

Research questions and friction points this paper is trying to address.

Interpreting human behavior to anticipate confusion during robot failure explanations
Assessing human readiness for less detailed explanations without causing confusion
Adapting the robot's explanation levels based on observed human behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts robot explanations based on human behavior
Uses facial emotion, gaze, and gesture data
Predicts human confusion to adjust explanation level
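The innovation points above can be tied together as a minimal closed-loop sketch: the robot tries progressively less detailed explanations while predicted confusion stays low, and escalates detail when confusion is detected. The level names and threshold are assumptions for illustration, not the paper's exact policy.

```python
# Assumed granularity ladder, from least to most detailed.
LEVELS = ["non-verbal cue", "brief explanation", "detailed explanation"]

def adapt_level(current: int, confusion_prob: float,
                threshold: float = 0.5) -> int:
    """Closed-loop adaptation: step up the detail ladder when the
    predictor signals confusion, otherwise try a less detailed
    explanation next time (shortening explanations on average)."""
    if confusion_prob >= threshold:
        return min(current + 1, len(LEVELS) - 1)
    return max(current - 1, 0)
```

Starting from a brief explanation, a confused reaction escalates to the detailed level, while a clearly unconfused one lets the robot fall back to a non-verbal cue on the next failure.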
Andreas Naoum
Division of Robotics, Perception and Learning (RPL), EECS, KTH Royal Institute of Technology, Sweden
P. Khanna
Division of Robotics, Perception and Learning (RPL), EECS, KTH Royal Institute of Technology, Sweden
Elmira Yadollahi
Assistant Professor (UK Lecturer) at Lancaster University
Explainability in Robotics, Perspective-Taking, Social AI, Human-Robot Interaction, Child-Robot Interaction
Maarten Bjorkman
Division of Robotics, Perception and Learning (RPL), EECS, KTH Royal Institute of Technology, Sweden
Christian Smith
Associate Professor, Division of Robotics, Perception, and Learning, KTH
robotics