Prior-Guided Diffusion Planning for Offline Reinforcement Learning

πŸ“… 2025-05-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing diffusion-based policies for offline reinforcement learning suffer from inadequate multimodal action modeling, distributional shift, and high inference overhead. To address these issues, this paper proposes an efficient diffusion planning method. Its core contributions are threefold: (1) replacing the fixed Gaussian prior of a behavior-cloned diffusion model with a learnable prior to improve trajectory quality; (2) introducing Prior Guidance, a behavioral-regularization framework in latent space that jointly preserves fidelity to the behavior policy and enables long-horizon return optimization; and (3) generating high-return trajectories from a single diffusion sampling pass, eliminating the need for multi-candidate sampling or inference-time reward-based selection. Evaluated on multiple long-horizon offline RL benchmarks, the method consistently outperforms state-of-the-art diffusion policies and planners in both trajectory quality and inference efficiency.

πŸ“ Abstract
Diffusion models have recently gained prominence in offline reinforcement learning due to their ability to effectively learn high-performing, generalizable policies from static datasets. Diffusion-based planners facilitate long-horizon decision-making by generating high-quality trajectories through iterative denoising, guided by return-maximizing objectives. However, existing guided sampling strategies such as Classifier Guidance, Classifier-Free Guidance, and Monte Carlo Sample Selection either produce suboptimal multi-modal actions, struggle with distributional drift, or incur prohibitive inference-time costs. To address these challenges, we propose Prior Guidance (PG), a novel guided sampling framework that replaces the standard Gaussian prior of a behavior-cloned diffusion model with a learnable distribution, optimized via a behavior-regularized objective. PG directly generates high-value trajectories without costly reward optimization of the diffusion model itself, and eliminates the need to sample multiple candidates at inference for sample selection. We present an efficient training strategy that applies behavior regularization in latent space, and empirically demonstrate that PG outperforms state-of-the-art diffusion policies and planners across diverse long-horizon offline RL benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Improving diffusion-based planning in offline reinforcement learning
Addressing suboptimal actions and distributional drift issues
Reducing inference-time costs in guided sampling strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prior Guidance replaces Gaussian prior with learnable distribution
Behavior regularization applied in latent space
Direct high-value trajectory generation without reward optimization
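The learnable-prior idea behind Prior Guidance can be sketched in a toy linear setting. This is a minimal illustration only: the one-step linear "denoiser" `W`, the linear value model `w_value`, and the KL-regularized objective are hypothetical stand-ins chosen so the optimum has a closed form, not the paper's actual architecture or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 4                         # latent / trajectory dimension (toy size)
W = rng.normal(size=(D, D))   # frozen one-step "denoiser": trajectory = W @ z
w_value = rng.normal(size=D)  # frozen linear value model on trajectories

mu = np.zeros(D)              # learnable prior mean (starts at the standard normal)
beta = 1.0                    # behavior-regularization strength
lr = 0.1

for _ in range(200):
    # Objective: E_{z ~ N(mu, I)}[ value(W z) ] - beta * KL(N(mu, I) || N(0, I)).
    # In this linear toy both terms have analytic gradients in mu:
    grad_value = W.T @ w_value   # d/dmu of E[ w_value . (W z) ]
    grad_kl = mu                 # d/dmu of 0.5 * ||mu||^2
    mu += lr * (grad_value - beta * grad_kl)

# Closed-form optimum of the regularized objective: mu* = W^T w_value / beta.
mu_star = W.T @ w_value / beta
print(np.allclose(mu, mu_star, atol=1e-3))

# "Single-pass" generation: draw one latent from the learned prior, decode once,
# with no candidate set to rank and no reward evaluation at inference time.
z = mu + rng.normal(size=D)
trajectory = W @ z
```

The regularizer pulls the learned prior back toward the behavior model's standard normal, so `beta` plays the usual offline-RL role of trading return maximization against distributional shift.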
πŸ”Ž Similar Papers
No similar papers found.