Spatial-Temporal Aware Visuomotor Diffusion Policy Learning

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-based imitation learning approaches rely on behavior cloning over supervised trajectory data, and thus fail to adequately model the 3D spatial structure and 4D spatiotemporal dynamics essential for real-world robotic manipulation. To address this, the authors propose the 4D Diffusion Policy (DP4), the first framework to integrate a dynamic Gaussian world model into vision-based robotic imitation learning. DP4 jointly reconstructs the current and future multi-frame 3D scenes from single-view RGB-D inputs, explicitly capturing spatiotemporal dependencies. The method unifies diffusion-based policy learning, 4D spatiotemporal forecasting, and dynamic 3D scene modeling, overcoming the perceptual limitations of conventional trajectory cloning. Evaluated on 17 simulated tasks with 173 variants, DP4 improves average success rates by 16.4% (Adroit), 14% (DexArt), and 6.45% (RLBench); on three real-robot manipulation tasks it yields an 8.6% absolute gain, significantly enhancing generalization and spatiotemporal reasoning in complex, long-horizon interactions.
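The paper's implementation is not reproduced here, but the diffusion-policy core it builds on can be sketched. The following is a minimal, hedged illustration of DDPM-style action generation: starting from Gaussian noise, an action trajectory is iteratively denoised conditioned on the current observation. The schedule values, trajectory shape, and the toy `eps_model` closure are illustrative assumptions, not DP4's actual architecture or hyperparameters.

```python
import numpy as np

def make_ddpm_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    # Standard linear variance schedule (illustrative values, not from the paper).
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def sample_actions(eps_model, obs, horizon=8, action_dim=7, T=50, seed=0):
    """DDPM reverse process over an action trajectory of shape (horizon, action_dim)."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_ddpm_schedule(T)
    x = rng.standard_normal((horizon, action_dim))   # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(x, t, obs)                   # predicted noise, conditioned on obs
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

# Stand-in noise predictor; a real policy would be a trained network taking
# the (possibly 3D/4D) scene representation as conditioning input.
toy_eps = lambda x, t, obs: 0.1 * x
actions = sample_actions(toy_eps, obs=None)
print(actions.shape)  # (8, 7)
```

In DP4 the conditioning signal would come from the dynamic Gaussian world model's scene representation rather than a raw observation, which is what distinguishes it from plain trajectory cloning.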

📝 Abstract
Visual imitation learning is effective for robots to learn versatile tasks. However, many existing methods rely on behavior cloning with supervised historical trajectories, limiting their 3D spatial and 4D spatiotemporal awareness. Consequently, these methods struggle to capture the 3D structures and 4D spatiotemporal relationships necessary for real-world deployment. In this work, we propose 4D Diffusion Policy (DP4), a novel visual imitation learning method that incorporates spatiotemporal awareness into diffusion-based policies. Unlike traditional approaches that rely on trajectory cloning, DP4 leverages a dynamic Gaussian world model to guide the learning of 3D spatial and 4D spatiotemporal perceptions from interactive environments. Our method constructs the current 3D scene from a single-view RGB-D observation and predicts the future 3D scene, optimizing trajectory generation by explicitly modeling both spatial and temporal dependencies. Extensive experiments across 17 simulation tasks with 173 variants and 3 real-world robotic tasks demonstrate that the 4D Diffusion Policy (DP4) outperforms baseline methods, improving the average simulation task success rate by 16.4% (Adroit), 14% (DexArt), and 6.45% (RLBench), and the average real-world robotic task success rate by 8.6%.
Problem

Research questions and friction points this paper is trying to address.

Enhancing 3D spatial and 4D spatiotemporal awareness in robots
Overcoming limitations of behavior cloning with supervised trajectories
Improving real-world deployment via dynamic Gaussian world modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses 4D Diffusion Policy for spatiotemporal awareness
Leverages dynamic Gaussian world model
Constructs 3D scenes from single-view RGB-D
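The last bullet, constructing a 3D scene from a single-view RGB-D observation, rests on standard pinhole back-projection. A minimal sketch (the intrinsics and toy depth map below are assumed values for illustration; DP4 goes further and fits dynamic Gaussians to such points):

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Lift a single-view depth map (H, W) into camera-frame 3D points (H*W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)   # toy 4x4 depth map, 2 m everywhere
pts = backproject_depth(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

The pixel at the principal point (`cx`, `cy`) maps to a point straight ahead on the optical axis, so its X and Y are zero and its Z equals the measured depth.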