🤖 AI Summary
Existing methods for robotic pointing gesture generation focus primarily on target identification and lack a unified model of contextual awareness and human-like naturalness. Method: This paper proposes a context-aware, human-like pointing gesture generation framework that integrates reinforcement learning with motion imitation. We first construct a comprehensive motion-capture dataset covering diverse pointing styles and targets; we then learn motion priors from this data and jointly optimize, in simulation, for both accuracy (target localization error) and naturalness (kinematic plausibility and posture adaptability). Contribution/Results: Experiments demonstrate that the method dynamically adjusts full-body pose according to the spatial location of the target and the environmental context, achieving millimeter-level pointing accuracy while substantially improving gesture naturalness and human–robot interaction fluency, marking a critical transition from “recognition” to “generation” in robotic pointing behavior.
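The paper does not publish its reward formulation; a minimal sketch of the kind of combined objective described above (an accuracy term plus an imitation term) might look like the following, where the function name, weights, and decay scales are all hypothetical:

```python
import numpy as np

def pointing_reward(fingertip_pos, target_pos, pose, ref_pose,
                    w_acc=1.0, w_imit=0.5):
    """Hypothetical combined reward for a pointing policy.

    Accuracy term: decays with target localization error (meters).
    Imitation term: decays with deviation from a mocap reference pose
    (joint angles), encouraging human-like motion. Weights/scales are
    illustrative, not taken from the paper.
    """
    err = np.linalg.norm(fingertip_pos - target_pos)   # localization error
    r_acc = np.exp(-10.0 * err)                        # ~1 when on target
    pose_dist = np.linalg.norm(pose - ref_pose)        # deviation from reference
    r_imit = np.exp(-2.0 * pose_dist)                  # ~1 when human-like
    return w_acc * r_acc + w_imit * r_imit
```

In practice such a reward would be evaluated per simulation step inside an RL training loop, with the imitation reference drawn from the motion-capture dataset; tuning the relative weights trades off precision against naturalness.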
📝 Abstract
Pointing is a key mode of human–robot interaction, yet most prior work has focused on recognizing pointing gestures rather than generating them. We present a motion-capture dataset of human pointing gestures covering diverse styles, handedness, and spatial targets. Using reinforcement learning with motion imitation, we train policies that reproduce human-like pointing while maximizing precision. Results show that our approach enables context-aware pointing behaviors in simulation, balancing task performance with natural dynamics.