Enhancing Policy Learning with World-Action Model

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional world models rely solely on image prediction, which hinders their ability to learn the action-relevant representations essential for control tasks, thereby limiting policy performance. This work proposes the World-Action Model (WAM), which introduces inverse dynamics regularization into the DreamerV2 framework for the first time, jointly modeling future observations and the actions that drive state transitions so that action information is explicitly encoded in the latent space. Without modifying the policy architecture, WAM enables pretraining of diffusion policies in the latent space and combines behavioral cloning with in-model PPO fine-tuning. On the CALVIN benchmark, this approach improves behavioral cloning success rates from 59.4% to 71.2%, and achieves 92.8% after PPO fine-tuning, surpassing the 79.8% baseline, with some tasks reaching 100% success while using 8.7× fewer training steps.
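The two-stage recipe in the summary (behavioral cloning on frozen world-model latents, then fine-tuning by rolling the policy inside the world model) can be sketched in miniature. Everything below is a toy stand-in under stated assumptions: a least-squares map replaces the diffusion policy, a hand-written linear system replaces the learned latent dynamics, and a quadratic reward replaces PPO's objective; none of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: behavioral cloning on frozen world-model latents.
# Toy stand-ins: WAM actually trains a diffusion policy on learned latents.
latents = rng.normal(size=(32, 8))        # encoded demonstration states
demo_actions = rng.normal(size=(32, 3))   # expert actions

# Least-squares "policy" as a minimal stand-in for diffusion-policy BC.
policy, *_ = np.linalg.lstsq(latents, demo_actions, rcond=None)
bc_error = np.mean((latents @ policy - demo_actions) ** 2)

# Stage 2: refine the policy inside the frozen world model.
# The paper uses model-based PPO; here we only evaluate an imagined rollout
# under a hypothetical linear latent transition to show the structure.
def imagined_rollout(policy, z0, steps=5):
    """Roll the policy forward in a toy latent dynamics model, return a toy return."""
    A = np.eye(8) * 0.9                       # hypothetical latent transition
    B = rng.normal(scale=0.05, size=(3, 8))   # hypothetical action effect
    z, ret = z0, 0.0
    for _ in range(steps):
        a = z @ policy          # policy acts on the latent state
        z = z @ A + a @ B       # imagined next latent, no environment needed
        ret += -np.sum(z ** 2)  # toy reward: stay near the origin
    return ret

ret = imagined_rollout(policy, latents[0])
```

The key structural point, which carries over to the real method, is that stage 2 never queries the environment: rollouts, rewards, and policy gradients all come from the frozen world model.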
📝 Abstract
This paper presents the World-Action Model (WAM), an action-regularized world model that jointly reasons over future visual observations and the actions that drive state transitions. Unlike conventional world models trained solely via image prediction, WAM incorporates an inverse dynamics objective into DreamerV2 that predicts actions from latent state transitions, encouraging the learned representations to capture action-relevant structure critical for downstream control. We evaluate WAM on enhancing policy learning across eight manipulation tasks from the CALVIN benchmark. We first pretrain a diffusion policy via behavioral cloning on world model latents, then refine it with model-based PPO inside the frozen world model. Without modifying the policy architecture or training procedure, WAM improves average behavioral cloning success from 59.4% to 71.2% over DreamerV2 and DiWA baselines. After PPO fine-tuning, WAM achieves 92.8% average success versus 79.8% for the baseline, with two tasks reaching 100%, using 8.7x fewer training steps.
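The inverse dynamics objective described in the abstract, predicting the action from a pair of consecutive latent states and adding that error to the world-model loss, can be written down compactly. This is a minimal numpy sketch under loud assumptions: a single linear prediction head, made-up dimensions, and an invented weighting coefficient `beta`; the paper's actual head architecture and loss weighting are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions; the paper's latent and action sizes differ).
LATENT, ACTION, BATCH = 8, 3, 16

# Fake latent transitions and actions standing in for world-model training data.
z_t = rng.normal(size=(BATCH, LATENT))
z_next = rng.normal(size=(BATCH, LATENT))
actions = rng.normal(size=(BATCH, ACTION))

# Inverse dynamics head: predict a_t from (z_t, z_{t+1}).
# A single linear layer here; a real implementation would use an MLP.
W = rng.normal(scale=0.1, size=(2 * LATENT, ACTION))

def inverse_dynamics_loss(z_t, z_next, actions, W):
    """MSE between predicted and true actions: the regularizer WAM adds."""
    pred = np.concatenate([z_t, z_next], axis=1) @ W
    return np.mean((pred - actions) ** 2)

# Stand-in for the base world model's image-prediction loss.
recon_loss = 1.0
beta = 0.5  # weighting coefficient (assumption; not given in the abstract)

total_loss = recon_loss + beta * inverse_dynamics_loss(z_t, z_next, actions, W)
```

Because the action must be recoverable from the latent transition, minimizing this extra term forces the representation to keep action-relevant structure that pure image prediction can discard.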
Problem

Research questions and friction points this paper is trying to address.

policy learning
world model
action regularization
robotic manipulation
behavioral cloning
Innovation

Methods, ideas, or system contributions that make the work stand out.

World-Action Model
inverse dynamics
action-regularized world model
model-based reinforcement learning
diffusion policy
Yuci Han
Photogrammetry and Computer Vision Lab, The Ohio State University, Columbus, OH 43210, USA
Alper Yilmaz
Professor, The Ohio State University
Biomimetic Navigation, Deep Learning, Computer Vision, Photogrammetry