🤖 AI Summary
Grasping with diverse robot grippers suffers from the lack of a unified coordinate representation and from the difficulty of modeling correspondences between heterogeneous grippers and objects during grasp synthesis and transfer.
Method: We propose the Unified Gripper Coordinate Space (UGCS), a shared geometric representation built upon spherical coordinates to unify heterogeneous grippers. It enables, for the first time, point-to-point spherical coordinate mapping between grippers and objects. Our approach integrates a conditional variational autoencoder (CVAE), spherical parameterization, and correspondence-driven joint optimization of pose and joint angles.
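To make the spherical parameterization concrete, here is a minimal sketch, assuming gripper surface points are sampled in a palm-centered frame; the function and variable names are illustrative, not the paper's actual implementation. Dropping the radial distance is one plausible way to let grippers of different sizes share the same 2D angular space.

```python
import numpy as np

def to_unified_coords(points, palm_center):
    """Map gripper surface points to spherical (theta, phi) coordinates.

    points: (N, 3) Cartesian points sampled on the gripper surface.
    palm_center: (3,) origin of the shared spherical frame.
    Returns (N, 2) angles -- the shared coordinates -- and the radial
    distances, which can be discarded so grippers of different sizes
    land in the same 2D angular space.
    """
    d = points - palm_center                          # center on the palm
    r = np.linalg.norm(d, axis=1, keepdims=True)      # radial distance
    u = d / np.clip(r, 1e-9, None)                    # project to unit sphere
    theta = np.arccos(np.clip(u[:, 2], -1.0, 1.0))    # polar angle in [0, pi]
    phi = np.arctan2(u[:, 1], u[:, 0])                # azimuth in (-pi, pi]
    return np.stack([theta, phi], axis=1), r.squeeze(1)
```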
Contribution/Results: UGCS supports zero-shot generalization to unseen grippers and cross-platform grasp transfer, including from human hand demonstrations to previously unseen robotic grippers. Evaluated both in simulation and on real robotic platforms, the method consistently generates diverse, stable grasps and achieves high-success-rate grasp transfer across objects and hardware platforms.
📝 Abstract
We introduce a novel grasp representation named the Unified Gripper Coordinate Space (UGCS) for grasp synthesis and grasp transfer. Our representation leverages spherical coordinates to create a shared coordinate space across different robot grippers, enabling it to synthesize and transfer grasps for both novel objects and previously unseen grippers. The strength of this representation lies in its ability to map the palm and fingers of a gripper into the unified coordinate space. Grasp synthesis is formulated as predicting unified spherical coordinates for object surface points via a conditional variational autoencoder. The predicted unified gripper coordinates establish exact correspondences between gripper and object points, which are used to optimize the grasp pose and joint values. Grasp transfer is enabled by the point-to-point correspondence between any two (potentially unseen) grippers and solved via a similar optimization. Extensive simulation and real-world experiments showcase the efficacy of the unified grasp representation for generating stable and diverse grasps. We further showcase real-world grasp transfer from human demonstrations across different objects.
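As a rough illustration of the correspondence-driven optimization, the sketch below recovers only the rigid grasp pose from predicted point-to-point matches, using the standard Kabsch/SVD alignment; the paper additionally optimizes joint values, which is omitted here, and all names are hypothetical rather than the authors' API.

```python
import numpy as np

def fit_grasp_pose(gripper_pts, object_pts):
    """Least-squares rigid transform (R, t) aligning gripper surface points
    to their predicted object correspondences (Kabsch algorithm via SVD)."""
    assert gripper_pts.shape == object_pts.shape
    cg, co = gripper_pts.mean(axis=0), object_pts.mean(axis=0)
    H = (gripper_pts - cg).T @ (object_pts - co)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cg
    return R, t                                     # p_obj ~= R @ p_grip + t
```

In this reading, grasp transfer between two grippers reduces to the same fit: points on the source and target gripper that share unified coordinates are treated as correspondences, and the target's pose (and, in the full method, joints) are optimized against them.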