🤖 AI Summary
Existing diffusion-based policies for offline reinforcement learning suffer from inadequate multimodal action modeling, distributional shift, and high inference overhead. To address these issues, this paper proposes an efficient diffusion planning method with three core contributions: (1) replacing the fixed Gaussian prior of a behavior-cloned diffusion model with a learnable prior to improve the quality of sampled trajectories; (2) introducing Prior Guidance, a novel behavior-regularization framework applied in latent space that jointly preserves policy fidelity and long-horizon optimization capability; and (3) generating high-return trajectories in a single diffusion sampling pass, eliminating the need for multi-candidate sampling or reward-based selection at inference. Evaluated on multiple long-horizon offline RL benchmarks, the method consistently outperforms state-of-the-art diffusion policies and planners in both trajectory quality and inference efficiency.
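The core mechanical change described above, swapping the fixed N(0, I) prior for a learnable one and decoding a trajectory in a single pass, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the `denoise_one_step` stand-in, and the diagonal-Gaussian parameterization of the learnable prior are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: planning horizon H, action dimension A (illustrative only).
H, A = 8, 2
dim = H * A

# Conventional diffusion planner: latent drawn from a fixed N(0, I) prior.
fixed_prior_sample = rng.standard_normal(dim)

# Prior Guidance (sketch): the fixed Gaussian is replaced by a *learnable*
# prior, here a diagonal Gaussian whose mean and log-std would be trained
# with a behavior-regularized objective while the diffusion model stays frozen.
prior_mu = np.zeros(dim)       # learned parameters (initialized at the old prior)
prior_log_std = np.zeros(dim)

def sample_learnable_prior():
    eps = rng.standard_normal(dim)
    return prior_mu + np.exp(prior_log_std) * eps

def denoise_one_step(z):
    # Stand-in for the frozen behavior-cloned diffusion model's single-step
    # denoiser; a real model would map the latent z to a full trajectory.
    return np.tanh(z).reshape(H, A)

# Single-step generation: one prior sample, one denoiser pass,
# no candidate set and no inference-time reward evaluation.
trajectory = denoise_one_step(sample_learnable_prior())
print(trajectory.shape)  # (8, 2)
```

The point of the sketch is that the expensive parts of earlier guided-sampling schemes (many candidates, reward scoring at inference) are moved into training the prior's parameters, so inference is one sample plus one decode.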
📝 Abstract
Diffusion models have recently gained prominence in offline reinforcement learning due to their ability to effectively learn high-performing, generalizable policies from static datasets. Diffusion-based planners facilitate long-horizon decision-making by generating high-quality trajectories through iterative denoising, guided by return-maximizing objectives. However, existing guided sampling strategies such as Classifier Guidance, Classifier-Free Guidance, and Monte Carlo Sample Selection either produce suboptimal multi-modal actions, struggle with distributional drift, or incur prohibitive inference-time costs. To address these challenges, we propose Prior Guidance (PG), a novel guided sampling framework that replaces the standard Gaussian prior of a behavior-cloned diffusion model with a learnable distribution, optimized via a behavior-regularized objective. PG directly generates high-value trajectories without costly reward optimization of the diffusion model itself, and eliminates the need to sample multiple candidates at inference for sample selection. We present an efficient training strategy that applies behavior regularization in latent space, and empirically demonstrate that PG outperforms state-of-the-art diffusion policies and planners across diverse long-horizon offline RL benchmarks.
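The behavior-regularized objective in latent space plausibly involves keeping the learnable prior close to the original standard-normal prior of the behavior-cloned model; for a diagonal Gaussian this regularizer has a closed form. The sketch below shows only that standard KL term, KL(N(mu, diag(sigma^2)) || N(0, I)) = 1/2 * sum(mu^2 + sigma^2 - 1 - log sigma^2); treating it as the regularizer of PG's objective is an assumption, not a claim about the paper's exact loss.

```python
import numpy as np

def kl_to_standard_normal(mu, log_std):
    """Closed-form KL( N(mu, diag(exp(2*log_std))) || N(0, I) ),
    a natural latent-space behavior regularizer for a learnable Gaussian prior."""
    var = np.exp(2.0 * log_std)
    return 0.5 * np.sum(mu ** 2 + var - 1.0 - 2.0 * log_std)

# Sanity check: the penalty vanishes when the learnable prior equals N(0, I),
# and grows as the prior drifts away from it.
print(kl_to_standard_normal(np.zeros(3), np.zeros(3)))  # 0.0
print(kl_to_standard_normal(np.ones(3), np.zeros(3)) > 0)  # True
```

In a full training loop this term would be weighted against a value-maximization term on decoded trajectories, trading off return against fidelity to the behavior-cloned model.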