🤖 AI Summary
Traditional world models rely solely on image prediction, which hinders their ability to learn action-relevant representations essential for control tasks, thereby limiting policy performance. This work proposes the World-Action Model (WAM), which introduces inverse dynamics regularization into the DreamerV2 framework for the first time, jointly modeling future observations and the actions that drive state transitions to explicitly encode action information in the latent space. Without modifying the policy architecture, WAM enables pretraining of diffusion policies in the latent space and integrates behavioral cloning with in-model PPO fine-tuning. On the CALVIN benchmark, this approach improves behavioral cloning success rates from 59.4% to 71.2%, and achieves 92.8% after PPO fine-tuning—surpassing the baseline of 79.8%—with some tasks reaching 100% success while reducing training steps by 8.7×.
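The core idea above — predicting the action from a latent state transition and adding that error to the world-model loss — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the linear head, toy shapes, and weighting `lam` are all assumptions standing in for the MLP head and DreamerV2 training objectives used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def inverse_dynamics_loss(z_t, z_next, a_t, W, b):
    """Predict the action a_t from the latent transition (z_t -> z_next)
    and penalize squared error against the true action.
    A linear head stands in for the MLP used in practice."""
    x = np.concatenate([z_t, z_next], axis=-1)  # (batch, 2 * latent_dim)
    a_pred = x @ W + b                          # (batch, action_dim)
    return np.mean((a_pred - a_t) ** 2)

# Toy shapes (hypothetical, not from the paper).
batch, latent_dim, action_dim = 32, 16, 7
z_t    = rng.normal(size=(batch, latent_dim))
z_next = rng.normal(size=(batch, latent_dim))
a_t    = rng.normal(size=(batch, action_dim))
W = rng.normal(size=(2 * latent_dim, action_dim)) * 0.1
b = np.zeros(action_dim)

# Total loss = usual DreamerV2 objectives + lam * inverse-dynamics term;
# only the regularizer is shown here.
lam = 1.0
aux = lam * inverse_dynamics_loss(z_t, z_next, a_t, W, b)
```

Because the regularizer only adds a term to the training objective, the latent space is forced to encode action-relevant structure without any change to the policy architecture, as the summary notes.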
📝 Abstract
This paper presents the World-Action Model (WAM), an action-regularized world model that jointly reasons over future visual observations and the actions that drive state transitions. Unlike conventional world models trained solely via image prediction, WAM incorporates an inverse dynamics objective into DreamerV2 that predicts actions from latent state transitions, encouraging the learned representations to capture action-relevant structure critical for downstream control. We evaluate WAM's ability to enhance policy learning on eight manipulation tasks from the CALVIN benchmark. We first pretrain a diffusion policy via behavioral cloning on world model latents, then refine it with model-based PPO inside the frozen world model. Without modifying the policy architecture or training procedure, WAM improves average behavioral cloning success from 59.4% to 71.2% over the DreamerV2 and DiWA baselines. After PPO fine-tuning, WAM achieves 92.8% average success versus 79.8% for the baseline, with two tasks reaching 100%, using 8.7× fewer training steps.
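The fine-tuning stage described above optimizes the PPO clipped surrogate objective on rollouts imagined inside the frozen world model, so no further environment interaction is needed. A minimal numpy sketch of that objective follows; the clipping threshold `eps` and the toy log-probabilities are illustrative assumptions, and the imagined-rollout machinery is omitted.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate objective (returned as a loss to minimize).

    In WAM-style fine-tuning, (logp_new, logp_old, advantages) would come
    from latent rollouts imagined inside the frozen world model rather
    than from real environment transitions.
    """
    ratio = np.exp(logp_new - logp_old)                    # policy probability ratio
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))        # negate: maximize surrogate

# Toy example values (not from the paper).
logp_old = np.log(np.array([0.20, 0.50, 0.30]))
logp_new = np.log(np.array([0.25, 0.45, 0.30]))
adv = np.array([1.0, -0.5, 0.2])
loss = ppo_clip_loss(logp_new, logp_old, adv)
```

When the new and old policies coincide, the ratio is 1 everywhere and the loss reduces to the negative mean advantage, which is a quick sanity check for the implementation.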