FlowDreamer: A RGB-D World Model with Flow-based Motion Representations for Robot Manipulation

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited prediction accuracy of RGB-D world models in robotic manipulation tasks. We propose FlowDreamer—the first framework to explicitly incorporate 3D scene flow as a motion representation into RGB-D world modeling. Our method decouples motion modeling from visual generation: a U-Net architecture predicts dense 3D scene flow, and a conditional diffusion model synthesizes future RGB-D frames conditioned on the predicted flow field and historical observations. The entire model is end-to-end trainable and supports multimodal RGB-D fusion. Evaluated on four benchmarks, FlowDreamer significantly outperforms prior methods—achieving a 7% improvement in semantic similarity, an 11% gain in pixel-level reconstruction quality, and a 6% increase in robotic manipulation success rate. These results empirically validate that explicit 3D motion modeling provides critical performance gains for visual world models.

📝 Abstract
This paper investigates training better visual world models for robot manipulation, i.e., models that can predict future visual observations by conditioning on past frames and robot actions. Specifically, we consider world models that operate on RGB-D frames (RGB-D world models). Unlike canonical approaches that handle dynamics prediction mostly implicitly and reconcile it with visual rendering in a single model, we introduce FlowDreamer, which adopts 3D scene flow as an explicit motion representation. FlowDreamer first predicts 3D scene flow from past frames and action conditions with a U-Net, and a diffusion model then predicts the future frame conditioned on the scene flow. FlowDreamer is trained end-to-end despite its modularized nature. We conduct experiments on 4 different benchmarks, covering both video prediction and visual planning tasks. The results demonstrate that FlowDreamer outperforms other baseline RGB-D world models by 7% on semantic similarity, 11% on pixel quality, and 6% on success rate across various robot manipulation domains.
Problem

Research questions and friction points this paper is trying to address.

Training visual world models for robot manipulation
Predicting future RGB-D frames using scene flow
Improving performance in video prediction and planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses 3D scene flow as explicit motion representations
Combines U-Net and diffusion model for prediction
End-to-end training despite modularized design
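The page gives no pseudocode, but the core idea—dense 3D scene flow as an explicit motion representation for RGB-D frames—can be illustrated with a minimal sketch. Assuming a standard pinhole camera model, the code below back-projects a depth map into a 3D point cloud and advects it by a per-point flow field, which is the geometric operation a flow-conditioned generator can exploit. Function names and intrinsics here are hypothetical, not from the paper.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) into a 3D point cloud (H, W, 3)
    using pinhole intrinsics (fx, fy: focal lengths; cx, cy: principal point)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def apply_scene_flow(points, flow):
    """Advect each 3D point by its predicted scene-flow vector (H, W, 3)."""
    return points + flow

# Toy example: a flat surface 1 m from the camera, translated 0.1 m along +x.
depth = np.ones((4, 4))
points = depth_to_points(depth, fx=1.0, fy=1.0, cx=2.0, cy=2.0)
flow = np.zeros_like(points)
flow[..., 0] = 0.1  # uniform motion along the camera x-axis
next_points = apply_scene_flow(points, flow)
```

In FlowDreamer's decoupled design, the U-Net would output the `flow` field from past frames and actions, and the diffusion model would render the next RGB-D frame conditioned on it, rather than inferring motion implicitly.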