CausalGDP: Causality-Guided Diffusion Policies for Reinforcement Learning

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work integrates causal inference into diffusion-based reinforcement learning, addressing a limitation of existing diffusion policies: because they rely on statistical correlations alone, they cannot identify which action components have genuine causal effects on returns. The method first jointly learns a base diffusion policy and a causal dynamical model from offline data, then continuously refines the causal structure through online interaction, using the causal information as a guidance signal to steer action generation. By unifying causal discovery, diffusion probabilistic modeling, and offline reinforcement learning, the approach achieves competitive or superior performance, in both efficacy and stability, relative to state-of-the-art methods on complex, high-dimensional control tasks, highlighting the role of causal modeling in policy learning.

📝 Abstract
Reinforcement learning (RL) has achieved remarkable success in a wide range of sequential decision-making problems. Recent diffusion-based policies further improve RL by modeling complex, high-dimensional action distributions. However, existing diffusion policies primarily rely on statistical associations and fail to explicitly account for causal relationships among states, actions, and rewards, limiting their ability to identify which action components truly cause high returns. In this paper, we propose Causality-guided Diffusion Policy (CausalGDP), a unified framework that integrates causal reasoning into diffusion-based RL. CausalGDP first learns a base diffusion policy and an initial causal dynamical model from offline data, capturing causal dependencies among states, actions, and rewards. During real-time interaction, the causal information is continuously updated and incorporated as a guidance signal to steer the diffusion process toward actions that causally influence future states and rewards. By explicitly considering causality beyond association, CausalGDP focuses policy optimization on action components that genuinely drive performance improvements. Experimental results demonstrate that CausalGDP consistently achieves competitive or superior performance over state-of-the-art diffusion-based and offline RL methods, especially in complex, high-dimensional control tasks.
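The listing does not specify how the causal guidance is applied during sampling. As a rough illustration only, the "guidance signal to steer the diffusion process" can be sketched in the style of classifier guidance, where a per-dimension causal mask gates a return-gradient term so that only action components believed to causally affect rewards are steered. All names here (`denoise`, `value_grad`, `causal_mask`) are hypothetical stand-ins, not the paper's actual API.

```python
import numpy as np

def causally_guided_sample(denoise, value_grad, causal_mask,
                           action_dim, n_steps=50, guide_scale=1.0, rng=None):
    """Sketch of causality-guided reverse diffusion (hypothetical interface).

    denoise(a, t)  -> base diffusion model's predicted clean action
    value_grad(a)  -> gradient of an estimated return w.r.t. the action
    causal_mask    -> per-dimension weights in [0, 1]; components believed
                      to causally drive returns get weight near 1
    """
    rng = np.random.default_rng() if rng is None else rng
    a = rng.standard_normal(action_dim)            # start from pure noise
    for t in range(n_steps, 0, -1):
        a0 = denoise(a, t)                         # base policy's prediction
        # Steer only the causally relevant action components.
        a0 = a0 + guide_scale * causal_mask * value_grad(a0)
        noise = rng.standard_normal(action_dim) if t > 1 else 0.0
        alpha = t / n_steps                        # crude noise schedule
        a = (1 - alpha) * a0 + np.sqrt(alpha) * 0.1 * noise
    return a

# Toy usage: a quadratic "return" peaked at a target on the causal dims only.
target = np.array([1.0, -1.0, 0.0])
mask = np.array([1.0, 1.0, 0.0])                   # third dim deemed non-causal
denoise = lambda a, t: 0.9 * a                     # placeholder denoiser
vgrad = lambda a: -(a - target)                    # grad of -||a - target||^2
action = causally_guided_sample(denoise, vgrad, mask, action_dim=3)
```

The mask means the non-causal third dimension receives no guidance and is left to the base policy, which is the intuition the abstract describes: optimization effort concentrates on action components that genuinely drive performance.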
Problem

Research questions and friction points this paper is trying to address.

causality
diffusion policy
reinforcement learning
causal reasoning
action optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

causal reasoning
diffusion policy
reinforcement learning
offline RL
causal dynamics
Xiaofeng Xiao
Department of Mechanical & Industrial Engineering, Northeastern University, Boston, MA, USA
Xiao Hu
Department of Civil and Environmental Engineering, Northeastern University, Boston, MA, USA
Yang Ye
Department of Civil and Environmental Engineering, Northeastern University, Boston, MA, USA
Xubo Yue
Assistant Professor, Northeastern University
Causal learning, Gaussian process, Bayesian Optimization, federated learning