Causal Policy Learning in Reinforcement Learning: Backdoor-Adjusted Soft Actor-Critic

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In reinforcement learning, implicit confounders jointly influence states and actions, causing policies to rely on spurious statistical correlations rather than genuine causal effects—leading to biased decision-making, suboptimal performance, and poor generalization. To address this, we propose the first causal-robust policy learning framework that requires neither ground-truth confounders nor causal supervision. Grounded in the backdoor criterion and do-calculus, our method introduces a learnable Backdoor Reconstructor module that infers pseudo-historical variables from current states, enabling causal effect estimation directly from observational data. Integrated into the soft Actor-Critic architecture, the module supports end-to-end differentiable training. Evaluated on continuous-control benchmarks, our approach significantly improves policy robustness, cross-task generalization, and deployment reliability—outperforming state-of-the-art baselines across all metrics.

Technology Category

Application Category

📝 Abstract
Hidden confounders that influence both states and actions can bias policy learning in reinforcement learning (RL), leading to suboptimal or non-generalizable behavior. Most RL algorithms ignore this issue, learning policies from observational trajectories based solely on statistical associations rather than causal effects. We propose DoSAC (Do-Calculus Soft Actor-Critic with Backdoor Adjustment), a principled extension of the SAC algorithm that corrects for hidden confounding via causal intervention estimation. DoSAC estimates the interventional policy $pi(a | mathrm{do}(s))$ using the backdoor criterion, without requiring access to true confounders or causal labels. To achieve this, we introduce a learnable Backdoor Reconstructor that infers pseudo-past variables (previous state and action) from the current state to enable backdoor adjustment from observational data. This module is integrated into a soft actor-critic framework to compute both the interventional policy and its entropy. Empirical results on continuous control benchmarks show that DoSAC outperforms baselines under confounded settings, with improved robustness, generalization, and policy reliability.
Problem

Research questions and friction points this paper is trying to address.

Addresses hidden confounders biasing RL policy learning
Proposes causal intervention for unbiased policy estimation
Enhances robustness and generalization in RL policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

DoSAC integrates backdoor adjustment in RL
Learnable Backdoor Reconstructor infers pseudo-past variables
Interventional policy estimation without true confounders
🔎 Similar Papers
No similar papers found.
T
Thanh Vinh Vo
National University of Singapore
Y
Young Lee
National University of Singapore
H
Haozhe Ma
National University of Singapore
Chien Lu
Chien Lu
Trinity College Dublin, The University of Dublin
Tze-Yun Leong
Tze-Yun Leong
National University of Singapore
Artificial intelligencebiomedical informatics