🤖 AI Summary
Surgical robot automation is difficult in real-world settings because demonstration data is noisy and often contains failures, which undermines the robustness of learned policies. To address this, we propose the Diffusion Stabilizer Policy (DSP), a diffusion-based policy learning framework built on a two-stage stabilization training paradigm: (1) adaptive perturbation filtering via action prediction error estimation, and (2) incremental policy refinement. DSP trains reliably and transfers across tasks directly from noisy or even failed demonstration trajectories, without requiring high-fidelity expert demonstrations. Evaluated across multiple surgical simulation environments, DSP significantly improves policy accuracy and robustness: performance degradation under perturbations is reduced by 42%, and generalization surpasses existing imitation learning approaches.
📝 Abstract
Intelligent surgical robots have the potential to revolutionize clinical practice by enabling more precise and automated surgical procedures. However, the automation of such robots for surgical tasks remains under-explored compared to recent advancements in solving household manipulation tasks. Those successes have been largely driven by (1) advanced models, such as transformers and diffusion models, and (2) large-scale data utilization. Aiming to extend these successes to the domain of surgical robotics, we propose a diffusion-based policy learning framework, called Diffusion Stabilizer Policy (DSP), which enables training with imperfect or even failed trajectories. Our approach consists of two stages: first, we train the diffusion stabilizer policy using only clean data. Then, the policy is continuously updated using a mixture of clean and perturbed data, filtering samples by their action prediction error. Comprehensive experiments conducted in various surgical environments demonstrate the superior performance of our method in perturbation-free settings and its robustness when handling perturbed demonstrations.
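The two-stage scheme described above can be sketched with a toy example. This is a minimal illustration, not the paper's implementation: a least-squares regressor stands in for the diffusion policy, and the 60th-percentile error cutoff is an assumed threshold rule; all names (`fit`, `prediction_error`, the synthetic data) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the policy: a linear action predictor.
# The actual DSP model is a diffusion policy; this only illustrates the
# train -> filter-by-prediction-error -> refine loop.
def fit(X, Y):
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least-squares "training"
    return W

def prediction_error(W, X, Y):
    # per-sample action prediction error
    return np.linalg.norm(X @ W - Y, axis=1)

# Stage 1: train on clean demonstrations only.
W_true = rng.normal(size=(4, 2))
X_clean = rng.normal(size=(200, 4))
Y_clean = X_clean @ W_true + 0.01 * rng.normal(size=(200, 2))
W = fit(X_clean, Y_clean)

# Stage 2: a mixed batch where some demonstrations have corrupted actions.
X_mix = rng.normal(size=(100, 4))
Y_mix = X_mix @ W_true + 0.01 * rng.normal(size=(100, 2))
perturbed = rng.random(100) < 0.3
Y_mix[perturbed] += rng.normal(scale=2.0, size=(int(perturbed.sum()), 2))

# Filter by action prediction error, then refine on surviving samples.
err = prediction_error(W, X_mix, Y_mix)
keep = err < np.quantile(err, 0.6)  # assumed filtering rule
W = fit(np.vstack([X_clean, X_mix[keep]]),
        np.vstack([Y_clean, Y_mix[keep]]))

print(f"kept {int(keep.sum())}/100 mixed samples")
```

The key point the sketch captures is that the stage-1 policy defines the error metric used to reject perturbed demonstrations before stage-2 refinement, so no manual labeling of failed trajectories is needed.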