🤖 AI Summary
This work proposes the design of hesitation-aware robotic motion that is both generalizable and interpretable by humans, with the goal of enhancing coordination, attention allocation, and safety in human–robot collaboration. By integrating demonstrations from professional dancers with kinesthetic teaching on a Franka Emika Panda manipulator—captured via an RGB-D full-body motion tracking system—the study constructs the first multimodal dataset of hesitation behaviors, graded across three distinct hesitancy levels. The dataset comprises 70 full-body trajectories, 84 upper-limb trajectories, and 66 robot trajectories, encompassing both task-specific scenarios (e.g., approaching a Jenga tower) and free-space movements. The dataset is publicly released to establish a reproducible benchmark for research on hesitation in human–robot interaction.
📝 Abstract
In human–robot collaboration, a robot's expression of hesitancy is a critical factor that shapes human coordination strategies, attention allocation, and safety-related judgments. However, designing hesitant robot motion that generalizes is challenging because the observer's inference depends strongly on embodiment and context. To address these challenges, we introduce and open-source a multimodal, dancer-generated dataset of hesitant motion focused on specific context–embodiment pairs (i.e., a manipulator or human upper limb approaching a Jenga tower, and anthropomorphic whole-body motion in free space). The dataset includes (i) kinesthetic teaching demonstrations on a Franka Emika Panda reaching from a fixed start configuration to a fixed target (a Jenga tower) with three graded hesitancy levels (slight, significant, extreme) and (ii) synchronized RGB-D motion capture of dancers performing the same reaching behavior with their upper limb across the three hesitancy levels, plus full-body sequences for extreme hesitancy. We further provide documentation to enable reproducible benchmarking across robot and human modalities. Across all dancers, we obtained 70 unique whole-body trajectories, 84 upper-limb trajectories, and 66 kinesthetic teaching trajectories, the latter two spanning the three hesitancy levels. The dataset can be accessed here: https://brsrikrishna.github.io/Dance2Hesitate/.
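The dataset's composition as reported above can be summarized programmatically. The counts and hesitancy labels below come directly from the abstract; the dictionary layout and names (`TRAJECTORY_COUNTS`, `total_trajectories`) are illustrative only and do not reflect the dataset's actual on-disk schema.

```python
# Summary of the reported dataset composition (counts from the abstract).
HESITANCY_LEVELS = ("slight", "significant", "extreme")

TRAJECTORY_COUNTS = {
    "human_whole_body": 70,   # RGB-D full-body sequences (extreme hesitancy)
    "human_upper_limb": 84,   # RGB-D upper-limb reaches, all three levels
    "robot_kinesthetic": 66,  # Franka Emika Panda kinesthetic demos, all three levels
}

def total_trajectories(counts=TRAJECTORY_COUNTS):
    """Return the total number of recorded trajectories across modalities."""
    return sum(counts.values())

print(total_trajectories())  # → 220
```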