🤖 AI Summary
This study investigates how human responses to robot failures and explanation strategies evolve during long-term human-robot collaboration, with the aim of enhancing system robustness and maintaining trust. To this end, we introduce REFLEX, a novel multimodal dataset that uniquely synchronizes temporal annotations of both failure responses and explanation responses; encompasses diverse failure types, explanation levels, and explanation strategies; and captures RGB-D video, speech, and physiological signals, annotated at fine-grained facial Action Unit (AU), pose, and behavioral levels. Experiments employ a rigorously controlled, multi-strategy explanation comparison paradigm. The dataset comprises 120 hours of high-quality, expert-annotated data and is publicly released. REFLEX fills a critical gap in longitudinal trust-repair modeling, advancing the quantitative evaluation of failure explainability and enabling more accurate, data-driven trust prediction models.
📄 Abstract
This work presents REFLEX: Robotic Explanations to FaiLures and Human EXpressions, a comprehensive multimodal dataset capturing human reactions to robot failures and subsequent explanations in collaborative settings. It aims to facilitate research into human-robot interaction dynamics by addressing the need to study reactions both to initial failures and to the explanations that follow, as well as how these reactions evolve over long-term interaction. By providing rich, annotated data on human responses to different failure types, explanation levels, and varying explanation strategies, the dataset supports the development of more robust, adaptive, and satisfying robotic systems capable of maintaining positive relationships with human collaborators, even through challenges such as repeated failures.