🤖 AI Summary
This work addresses the challenge of enabling multifingered robotic hands to learn functional grasping without costly teleoperation demonstrations. We propose the first framework for functional grasp learning from unlabeled web-sourced RGB images: (1) automatic 3D hand–object interaction (HOI) pose reconstruction from web images; (2) kinematic retargeting of human hand poses to the robot hand, coupled with noise-robust alignment of object mesh geometry; and (3) physics-based augmentation via IsaacGym simulation and end-to-end policy learning. Our core contribution is a unified paradigm—HOI reconstruction, hand retargeting, and object alignment—that replaces manual teleoperation with scalable web imagery, enabling zero-shot functional generalization to unseen objects. In simulation, our method achieves an 83.4% success rate on nine novel objects (+6.7% absolute improvement) and a 1.8× gain in functional scoring. Real-world validation on the LEAP Hand yields an 85% grasp success rate.
📝 Abstract
Functional grasping is essential for enabling dexterous multi-finger robot hands to manipulate objects effectively. However, most prior work either focuses on power grasping, which simply involves holding an object still, or relies on costly teleoperated robot demonstrations to teach robots how to grasp each object functionally. Instead, we propose extracting human grasp information from web images, since they depict natural and functional object interactions, thereby bypassing the need for curated demonstrations. We reconstruct 3D hand–object interaction (HOI) meshes from RGB images, retarget the human hand pose to multi-finger robot hands, and align the noisy object mesh with its accurate 3D shape. We show that these relatively low-quality HOI data from inexpensive web sources can effectively train a functional grasping model. To further expand the grasp dataset for seen and unseen objects, we use the grasping policy initially trained on web data in the IsaacGym simulator to generate physically feasible grasps while preserving functionality. We train the grasping model on 10 object categories and evaluate it on 9 unseen objects, including challenging items such as syringes, pens, spray bottles, and tongs, which are underrepresented in existing datasets. The model trained on the web HOI dataset achieves a 75.8% success rate on seen objects and 61.8% across all objects in simulation, with a 6.7% improvement in success rate and a 1.8x increase in functionality ratings over baselines. Simulator-augmented data further boosts performance from 61.8% to 83.4%. Sim-to-real transfer to the LEAP Hand achieves an 85% success rate. Project website: https://webgrasp.github.io/.
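To make the retargeting step concrete: one common formulation (a toy sketch here, not the paper's actual method) treats retargeting as minimizing the distance between a reconstructed human fingertip position and the robot finger's fingertip under its forward kinematics. The two-link planar finger, link lengths, and coordinate-descent solver below are all illustrative assumptions, not details from the paper.

```python
import math

def fk(theta1, theta2, l1=0.04, l2=0.03):
    """Fingertip position of a toy 2-link planar finger (lengths in metres)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def fingertip_error(thetas, target):
    """Squared distance between the robot fingertip and the human fingertip target."""
    x, y = fk(thetas[0], thetas[1])
    return (x - target[0]) ** 2 + (y - target[1]) ** 2

def retarget(target, iters=200, step=0.1):
    """Crude coordinate descent on joint angles to match a fingertip target."""
    thetas = [0.0, 0.0]
    for _ in range(iters):
        improved = False
        for i in range(len(thetas)):
            for delta in (step, -step):
                cand = thetas[:]
                cand[i] += delta
                if fingertip_error(cand, target) < fingertip_error(thetas, target):
                    thetas = cand
                    improved = True
        if not improved:
            step *= 0.5  # refine once no single-joint move helps
    return thetas

# Example: recover joint angles that reach a known reachable target.
target = fk(0.5, 0.8)
solution = retarget(target)
```

Real systems replace this toy with the full robot hand's kinematics, multiple fingertip (and often palm) targets, and joint-limit and collision terms in the objective.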