AI Summary
This work addresses the challenge of transferring manipulation policies from human demonstrations to multi-fingered robotic hands, which is hindered by significant morphological and kinematic discrepancies. We propose AINA, the first framework that enables end-to-end learning of 3D point-cloud-based dexterous manipulation policies directly deployable on multi-fingered hands, using only first-person hand-eye videos (RGB, depth, and 3D head/hand poses) captured naturally by untrained users wearing Aria Gen 2 glasses. AINA eliminates reliance on simulation, reinforcement learning, and online fine-tuning. Instead, it employs a 3D point-based policy network coupled with a context-aware action modeling paradigm to implicitly align human kinematics with robotic morphology, substantially narrowing the embodiment gap. Evaluated on nine everyday manipulation tasks, the learned policies exhibit strong robustness and cross-task generalization. Critically, no robot-side data collection or post-hoc optimization is required, which significantly improves real-world deployment efficiency and cross-scenario adaptability.
Abstract
Learning multi-fingered robot policies from humans performing daily tasks in natural environments has long been a grand goal in the robotics community. Achieving this would mark significant progress toward generalizable robot manipulation in human environments, as it would reduce the reliance on labor-intensive robot data collection. Despite substantial efforts, progress toward this goal has been bottlenecked by the embodiment gap between humans and robots, as well as by difficulties in extracting relevant contextual and motion cues that enable learning of autonomous policies from in-the-wild human videos. We claim that with simple yet sufficiently powerful hardware for obtaining human data and our proposed framework AINA, we are now one significant step closer to achieving this dream. AINA enables learning multi-fingered policies from data collected by anyone, anywhere, and in any environment using Aria Gen 2 glasses. These glasses are lightweight and portable, feature a high-resolution RGB camera, provide accurate on-board 3D head and hand poses, and offer a wide stereo view that can be leveraged for depth estimation of the scene. This setup enables the learning of 3D point-based policies for multi-fingered hands that are robust to background changes and can be deployed directly without requiring any robot data (including online corrections, reinforcement learning, or simulation). We compare our framework against prior human-to-robot policy learning approaches, ablate our design choices, and demonstrate results across nine everyday manipulation tasks. Robot rollouts are best viewed on our website: https://aina-robot.github.io.
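To make the 3D point-based setup concrete: a world-frame point cloud of the scene can be built by back-projecting a metric depth map (e.g. estimated from the glasses' stereo views) through the camera intrinsics and transforming the result by the camera-to-world pose (e.g. from the on-board head tracking). The sketch below is a generic illustration of that standard pinhole back-projection, not AINA's actual implementation; the input names (`depth`, `K`, `T_world_cam`) are assumptions for the example.

```python
import numpy as np

def depth_to_world_points(depth, K, T_world_cam, stride=8):
    """Back-project a depth map to a world-frame point cloud.

    depth:       (H, W) metric depth in meters (0 = invalid)
    K:           (3, 3) pinhole camera intrinsics
    T_world_cam: (4, 4) camera-to-world transform (e.g. head pose)
    stride:      subsampling step in pixels
    Returns an (N, 3) array of world-frame points.
    """
    H, W = depth.shape
    vs, us = np.mgrid[0:H:stride, 0:W:stride]          # pixel grid
    z = depth[vs, us]
    valid = z > 0                                       # drop holes
    u, v, z = us[valid], vs[valid], z[valid]
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * z / fx                               # pinhole model
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], -1)  # homogeneous coords
    pts_world = (T_world_cam @ pts_cam.T).T[:, :3]      # camera -> world
    return pts_world
```

Because the points live in a fixed world frame, they are naturally invariant to head motion, which is one reason point-based observations can be robust to viewpoint and background changes.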