🤖 AI Summary
To address policy transfer failure in simulation-to-reality (Sim2Real) reinforcement learning caused by dynamics mismatch, this paper proposes a latent-space residual calibration method. Instead of modeling state-transition residuals in high-dimensional pixel space—where pixel-level reconstruction is challenging—it learns residuals in a low-dimensional latent space. The approach integrates an autoregressive latent world model, simulation-based pretraining, and fine-tuning on minimal real-world data, enabling imagination-based rollouts under calibrated dynamics for policy optimization. Its core contribution is the first formulation of residual correction in latent space, achieving robust dynamics adaptation with only a small amount of real interaction data. Experiments demonstrate significant improvements in cross-domain generalization across multiple visual MuJoCo benchmarks and a real-world robotic vision-based lane-following task, outperforming state-of-the-art Sim2Real transfer methods.
📝 Abstract
Simulation-to-reality reinforcement learning (RL) faces the critical challenge of reconciling discrepancies between simulated and real-world dynamics, which can severely degrade agent performance. A promising approach involves learning corrections to simulator forward dynamics represented as a residual error function; however, this operation is impractical with high-dimensional states such as images. To overcome this, we propose ReDRAW, a latent-state autoregressive world model pretrained in simulation and calibrated to target environments through residual corrections of latent-state dynamics rather than of explicit observed states. Using this adapted world model, ReDRAW enables RL agents to be optimized with imagined rollouts under corrected dynamics and then deployed in the real world. In multiple vision-based MuJoCo domains and a physical robot visual lane-following task, ReDRAW effectively models changes to dynamics and avoids overfitting in low-data regimes where traditional transfer methods fail.
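The core idea of correcting dynamics with a latent-space residual can be illustrated with a deliberately minimal sketch. Note this is not the paper's actual architecture (ReDRAW uses an autoregressive latent world model trained on images); here the "latent" dynamics are assumed linear, and all names, dimensions, and the least-squares residual fit are illustrative assumptions chosen so the example stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
dz, da = 4, 2  # illustrative latent and action dimensions

# Frozen "simulator" latent dynamics, standing in for the pretrained
# world model: z' = A_sim z + B_sim a
A_sim = 0.3 * rng.normal(size=(dz, dz))
B_sim = 0.3 * rng.normal(size=(dz, da))

# Real-world dynamics differ slightly (unknown to the agent)
A_real = A_sim + 0.1 * rng.normal(size=(dz, dz))
B_real = B_sim

# Small real-interaction dataset of (z, a, z') transitions
N = 64
Z = rng.normal(size=(N, dz))
U = rng.normal(size=(N, da))
Z_next = Z @ A_real.T + U @ B_real.T

# Residual targets live in latent space: real next-latent minus the
# frozen simulator model's prediction
residual = Z_next - (Z @ A_sim.T + U @ B_sim.T)

# Fit a linear residual model r(z, a) = [z; a] @ R by least squares,
# touching only the residual parameters, not the pretrained model
X = np.hstack([Z, U])
R, *_ = np.linalg.lstsq(X, residual, rcond=None)

def corrected_step(z, a):
    """Calibrated dynamics: frozen simulator prediction plus learned residual."""
    return A_sim @ z + B_sim @ a + np.concatenate([z, a]) @ R

# On a held-out (z, a), the corrected model should track real dynamics
z, a = rng.normal(size=dz), rng.normal(size=da)
err = np.linalg.norm(corrected_step(z, a) - (A_real @ z + B_real @ a))
```

Because the mismatch in this toy setup is itself linear, the residual fit recovers it almost exactly; the point is only that the correction is learned on top of frozen simulator dynamics, in the low-dimensional latent space, from a small batch of real transitions.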