Spatial RoboGrasp: Generalized Robotic Grasping Control Policy

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RGB-dependent grasping methods suffer from poor generalization, weak 3D geometric reasoning, and low robustness to illumination variations and occlusions. To address these limitations, this paper proposes a depth-aware 6-DoF grasping prompting mechanism that integrates multimodal perception and diffusion-based imitation learning within a shared 3D spatial representation. The approach combines domain randomization for simulation-to-real transfer, monocular depth estimation, depth-aware grasping prompt encoding, and a conditional diffusion policy network, eliminating reliance on hand-crafted features while ensuring strong geometric consistency and cross-environment generalization. Evaluated under challenging conditions, including complex lighting, severe occlusions, and high object diversity, the method achieves a 40% improvement in grasp success rate and a 45% increase in task success rate over state-of-the-art approaches.
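The domain randomization mentioned above typically perturbs the photometric properties of training images so the policy does not overfit to one lighting condition. As a minimal sketch (not the paper's actual augmentation pipeline; the ranges and the `domain_randomize` helper are illustrative assumptions), random brightness, contrast, and per-channel color jitter might look like this:

```python
import numpy as np

def domain_randomize(rgb: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Simple photometric domain randomization for an HxWx3 RGB image
    with values in [0, 1]: random brightness, contrast, and color jitter.
    Ranges below are illustrative, not taken from the paper."""
    img = rgb.astype(np.float64)
    brightness = rng.uniform(-0.2, 0.2)        # additive brightness shift
    contrast = rng.uniform(0.8, 1.2)           # multiplicative contrast
    color = rng.uniform(0.9, 1.1, size=3)      # per-channel color gain
    img = (img - 0.5) * contrast + 0.5 + brightness
    img = img * color
    return np.clip(img, 0.0, 1.0)              # keep values in valid range

rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64, 3))
aug = domain_randomize(img, rng)
```

Sampling fresh parameters per image forces the downstream perception stack to rely on geometry (here, the depth channel) rather than absolute color statistics.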

📝 Abstract
Achieving generalizable and precise robotic manipulation across diverse environments remains a critical challenge, largely due to limitations in spatial perception. While prior imitation-learning approaches have made progress, their reliance on raw RGB inputs and handcrafted features often leads to overfitting and poor 3D reasoning under varied lighting, occlusion, and object conditions. In this paper, we propose a unified framework that couples robust multimodal perception with reliable grasp prediction. Our architecture fuses domain-randomized augmentation, monocular depth estimation, and a depth-aware 6-DoF Grasp Prompt into a single spatial representation for downstream action planning. Conditioned on this encoding and a high-level task prompt, our diffusion-based policy yields precise action sequences, achieving up to 40% improvement in grasp success and 45% higher task success rates under environmental variation. These results demonstrate that spatially grounded perception, paired with diffusion-based imitation learning, offers a scalable and robust solution for general-purpose robotic grasping.
Problem

Research questions and friction points this paper is trying to address.

Achieving generalizable robotic manipulation across diverse environments
Overcoming limitations in spatial perception for precise grasping
Improving grasp success rates under environmental variations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses domain-randomized augmentation with depth estimation
Uses depth-aware 6-DoF Grasp Prompt
Implements diffusion-based policy for precise actions
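The diffusion-based policy in the last bullet generates actions by iteratively denoising a noise sample, conditioned on the spatial encoding. A minimal DDPM-style reverse process, assuming a trained noise predictor `eps_model(a_t, t, cond)` (the function name, schedule, and 7-dim action vector are all illustrative assumptions, not the paper's architecture), could be sketched as:

```python
import numpy as np

def denoise_actions(eps_model, cond, T=50, action_dim=7, rng=None):
    """DDPM-style reverse process: start from Gaussian noise and iteratively
    denoise an action vector, conditioned on a perception embedding `cond`.
    `eps_model` stands in for the trained conditional noise-prediction net."""
    rng = rng or np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    a = rng.standard_normal(action_dim)         # a_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = eps_model(a, t, cond)             # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
        a = (a - coef * eps) / np.sqrt(alphas[t])
        if t > 0:                               # re-inject noise except at t = 0
            a += np.sqrt(betas[t]) * rng.standard_normal(action_dim)
    return a

# Toy noise predictor standing in for the trained network.
dummy_eps = lambda a, t, cond: 0.1 * a
cond = np.zeros(16)  # e.g. the depth-aware grasp prompt embedding
action = denoise_actions(dummy_eps, cond)
```

In the paper's setting, `cond` would carry the fused depth-aware 6-DoF grasp prompt plus the high-level task prompt, so the same denoiser yields different action sequences for different scenes and goals.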