🤖 AI Summary
Robot grasping models trained on limited real-world data struggle to generalize to the geometrically diverse novel objects encountered in industrial settings such as warehouses and manufacturing plants. Method: The authors introduce GraspFactory, a large-scale synthetic grasping dataset containing over 109 million 6-DoF grasp poses for two widely used grippers, the Franka Panda (14,690 objects) and the Robotiq 2F-85 (33,710 objects), designed to maximize geometric diversity for training data-intensive models. Contribution/Results: A model trained on a subset of GraspFactory demonstrates generalization to novel objects in both simulated and real-robot experiments. The dataset and tools are released publicly, providing a scalable resource for data-driven grasping generalization without requiring extensive real-world annotation.
📝 Abstract
Robotic grasping is a crucial task in industrial automation, where robots are increasingly expected to handle a wide range of objects. However, a significant challenge arises when grasping models trained on limited datasets encounter novel objects. In real-world environments such as warehouses and manufacturing plants, object diversity can be vast, and grasping models must generalize to it. Training large, generalizable robot-grasping models requires geometrically diverse datasets. In this paper, we introduce GraspFactory, a dataset containing over 109 million 6-DoF grasps in total for the Franka Panda (14,690 objects) and Robotiq 2F-85 (33,710 objects) grippers. GraspFactory is designed for training data-intensive models, and we demonstrate the generalization capabilities of one such model trained on a subset of GraspFactory in both simulated and real-world settings. The dataset and tools are made available for download at https://graspfactory.github.io/.
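To make the notion of a 6-DoF grasp concrete, the sketch below shows one common convention: a grasp is a rigid pose (3-DoF translation plus 3-DoF rotation) of the gripper frame relative to the object frame. This is a minimal illustration only; the translation-plus-quaternion record layout here is an assumption, not GraspFactory's documented schema (see the project page for the actual format and tooling).

```python
import numpy as np

# Hypothetical record layout: one grasp = xyz translation (meters)
# plus a unit quaternion (x, y, z, w) in the object's frame.
# GraspFactory's actual on-disk schema may differ.

def grasp_to_matrix(translation, quaternion_xyzw):
    """Convert a 6-DoF grasp (translation + unit quaternion) into a
    4x4 homogeneous transform of the gripper in the object frame."""
    x, y, z, w = quaternion_xyzw
    # Standard quaternion-to-rotation-matrix conversion.
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = translation
    return T

# Example: a grasp 10 cm along the object's +z axis, no rotation.
T_grasp = grasp_to_matrix([0.0, 0.0, 0.10], [0.0, 0.0, 0.0, 1.0])
print(T_grasp)
```

Representing grasps as homogeneous transforms makes them easy to compose with camera or robot-base frames at execution time, which is typically how such dataset annotations are consumed in a grasping pipeline.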