Behavior-Regularized Diffusion Policy Optimization for Offline Reinforcement Learning

📅 2025-02-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In offline reinforcement learning, policy deviation from the behavioral data induces the risk of out-of-distribution actions. To address this, this work introduces behavior regularization into the diffusion policy optimization framework for the first time. The proposed method computes the KL-divergence regularizer analytically, as the accumulated discrepancy between reverse-time transition kernels, enabling explicit distributional constraints over reverse diffusion trajectories. A two-time-scale actor-critic algorithm then ensures convergence of the constrained policy. The method unifies expressive diffusion-based policy parameterization with behavior-cloning priors. Empirical evaluation on D4RL continuous-control benchmarks and synthetic 2D tasks demonstrates substantial improvements over existing diffusion-based and Gaussian policy methods in performance, safety, and stability.

📝 Abstract
The primary focus of offline reinforcement learning (RL) is to manage the risk of hazardous exploitation of out-of-distribution actions. An effective approach to achieve this goal is behavior regularization, which augments conventional RL objectives with constraints that keep the policy close to the behavior policy. Nevertheless, existing literature on behavior-regularized RL primarily focuses on explicit policy parameterizations, such as Gaussian policies. Consequently, it remains unclear how to extend this framework to more advanced policy parameterizations, such as diffusion models. In this paper, we introduce BDPO, a principled behavior-regularized RL framework tailored for diffusion-based policies, thereby combining the expressive power of diffusion policies with the robustness provided by regularization. The key ingredient of our method is to calculate the Kullback-Leibler (KL) regularization analytically as the accumulated discrepancies in reverse-time transition kernels along the diffusion trajectory. By integrating this regularization, we develop an efficient two-time-scale actor-critic RL algorithm that produces the optimal policy while respecting the behavior constraint. Comprehensive evaluations on synthetic 2D tasks and continuous control tasks from the D4RL benchmark validate BDPO's effectiveness and superior performance.
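The abstract's key ingredient — computing the KL regularizer analytically as accumulated discrepancies between reverse-time transition kernels — can be sketched for the common case of Gaussian reverse kernels with matched per-step variances, where each step's KL reduces to a scaled squared difference of predicted means. The function names and toy values below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gaussian_kl(mu_p, mu_q, sigma):
    """KL( N(mu_p, sigma^2 I) || N(mu_q, sigma^2 I) ) with a shared isotropic variance."""
    return float(np.sum((mu_p - mu_q) ** 2) / (2.0 * sigma ** 2))

def trajectory_kl(actor_means, behavior_means, sigmas):
    """Accumulate per-step reverse-kernel KLs along a diffusion trajectory.

    actor_means, behavior_means: predicted reverse means, one per denoising
    step; sigmas: the shared per-step standard deviations.
    """
    return sum(
        gaussian_kl(mu_a, mu_b, s)
        for mu_a, mu_b, s in zip(actor_means, behavior_means, sigmas)
    )

# Toy check: identical means give zero divergence; shifting one step's mean
# adds ||delta_mu||^2 / (2 sigma^2) to the trajectory total.
T, d = 5, 2
mus = [np.zeros(d) for _ in range(T)]
sig = [0.5] * T
print(trajectory_kl(mus, mus, sig))      # 0.0
shifted = [m.copy() for m in mus]
shifted[0] = shifted[0] + 1.0
print(trajectory_kl(shifted, mus, sig))  # 2 / (2 * 0.25) = 4.0
```

Because the per-step terms are closed-form, the trajectory-level constraint can be evaluated without sampling extra actions, which is what makes the regularizer "analytical" in the abstract's sense.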
Problem

Research questions and friction points this paper is trying to address.

Managing out-of-distribution action risk in offline reinforcement learning
Extending behavior regularization beyond explicit (e.g., Gaussian) policy parameterizations
Optimizing diffusion-based policies under behavior constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Behavior-regularized RL framework tailored to diffusion policies (BDPO)
Analytical KL regularization over reverse-time diffusion transition kernels
Efficient two-time-scale actor-critic algorithm with convergence to the constrained policy
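A two-time-scale actor-critic couples a fast-moving critic, which tracks the value of the current policy, with a slow-moving actor, which ascends that value minus the behavior-regularization penalty. The toy loop below illustrates only this step-size structure on a scalar problem; the objective, penalty coefficient, and variable names are stand-ins, not the paper's losses:

```python
theta, w = 2.0, 0.0
lr_critic, lr_actor, beta = 0.5, 0.05, 0.1  # fast critic, slow actor

for _ in range(200):
    # Critic (fast timescale): regress w toward a value target that
    # depends on the current actor parameter theta.
    target = -theta ** 2          # stand-in "value" of the actor
    w += lr_critic * (target - w)
    # Actor (slow timescale): ascend the value minus a KL-style penalty
    # that pulls theta toward the behavior solution (here, 0).
    grad = -2.0 * theta - beta * theta
    theta += lr_actor * grad

print(theta, w)  # both approach 0
```

The separation of step sizes is what lets the critic effectively converge for each actor iterate, which is the usual argument for convergence of such schemes.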
🔎 Similar Papers
No similar papers found.