GoalGrasp: Grasping Goals in Partially Occluded Scenarios without Grasp Training

📅 2024-05-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of enabling robots to execute user-specified 6-DOF grasps under partial occlusion without requiring grasp-specific training or pose annotations, this paper introduces GoalGrasp—the first method that achieves dense, stable grasp pose detection without any grasp pose labels or dedicated grasp training. The approach integrates 3D object detection (RCV, trained solely on 2D bounding box annotations), geometry-aware sampling within 3D bounding boxes, human hand kinematic priors, and a novel grasp stability metric. It establishes an end-to-end annotation-free "detection + prior + evaluation" paradigm for grasp pose generation. Evaluated on 1,000 complex scenes, GoalGrasp achieves a 94% overall grasp success rate—maintaining 92% even under significant occlusion—while attaining state-of-the-art dense pose coverage and substantially outperforming existing methods in grasp stability.
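The "detection + prior + evaluation" pipeline described above can be illustrated with a toy sketch: sample candidate grasp poses on the faces of a detected 3D bounding box, then rank them with a stability score. All details here (face-based sampling, the scoring function, names and parameters) are illustrative assumptions, not the paper's actual geometry-aware sampling or stability metric.

```python
import numpy as np

def sample_box_grasps(center, extents, n=32, rng=None):
    """Sample candidate grasp poses on the faces of an axis-aligned 3D
    bounding box. Hypothetical stand-in for GoalGrasp's geometry-aware
    sampling step; returns (position, approach_direction) pairs."""
    rng = np.random.default_rng(rng)
    center = np.asarray(center, dtype=float)
    extents = np.asarray(extents, dtype=float)  # full box dimensions
    grasps = []
    for _ in range(n):
        axis = rng.integers(3)                  # which face normal
        side = rng.choice([-1.0, 1.0])          # which of the two faces
        p = rng.uniform(-0.5, 0.5, size=3) * extents
        p[axis] = side * extents[axis] / 2.0    # snap point onto the face
        approach = np.zeros(3)
        approach[axis] = -side                  # approach toward the box interior
        grasps.append((center + p, approach))
    return grasps

def stability_score(position, approach, center):
    """Toy stability proxy (NOT the paper's metric): prefer grasps whose
    approach direction points toward the box center."""
    to_center = np.asarray(center, dtype=float) - np.asarray(position, dtype=float)
    d = np.linalg.norm(to_center)
    if d == 0.0:
        return 1.0
    return float(np.dot(to_center / d, approach))

# Rank candidates for a small box-shaped object and keep the best one.
obj_center, obj_extents = [0.0, 0.0, 0.1], [0.06, 0.06, 0.12]
grasps = sample_box_grasps(obj_center, obj_extents, n=64, rng=0)
best = max(grasps, key=lambda g: stability_score(g[0], g[1], obj_center))
```

In this simplification the user-specified target is just the box passed in; in the paper, that box comes from the RCV detector, and the sampling and scoring additionally encode human hand kinematic priors.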

📝 Abstract
We present GoalGrasp, a simple yet effective 6-DOF robot grasp pose detection method that does not rely on grasp pose annotations and grasp training. Our approach enables user-specified object grasping in partially occluded scenes. By combining 3D bounding boxes and simple human grasp priors, our method introduces a novel paradigm for robot grasp pose detection. First, we employ a 3D object detector named RCV, which requires no 3D annotations, to achieve rapid 3D detection in new scenes. Leveraging the 3D bounding box and human grasp priors, our method achieves dense grasp pose detection. The experimental evaluation involves 18 common objects categorized into 7 classes based on shape. Without grasp training, our method generates dense grasp poses for 1000 scenes. We compare our method's grasp poses to existing approaches using a novel stability metric, demonstrating significantly higher grasp pose stability. In user-specified robot grasping experiments, our approach achieves a 94% grasp success rate. Moreover, in user-specified grasping experiments under partial occlusion, the success rate reaches 92%.
Problem

Research questions and friction points this paper is trying to address.

Detecting 6-DoF grasp poses for user-specified occluded objects
Eliminating reliance on grasp pose annotations and training
Improving grasp stability and success rates in cluttered scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

6-DoF grasp detection without grasp training
Combines 3D boxes and human grasp priors
Fast grasp pose generation that remains robust under partial occlusion
Authors: Shun Gui, Y. Luximon