🤖 AI Summary
In reinforcement learning, implicit confounders jointly influence states and actions, causing policies to rely on spurious statistical correlations rather than genuine causal effects—leading to biased decision-making, suboptimal performance, and poor generalization. To address this, we propose the first causal-robust policy learning framework that requires neither ground-truth confounders nor causal supervision. Grounded in the backdoor criterion and do-calculus, our method introduces a learnable Backdoor Reconstructor module that infers pseudo-historical variables from current states, enabling causal effect estimation directly from observational data. Integrated into the soft Actor-Critic architecture, the module supports end-to-end differentiable training. Evaluated on continuous-control benchmarks, our approach significantly improves policy robustness, cross-task generalization, and deployment reliability—outperforming state-of-the-art baselines across all metrics.
📝 Abstract
Hidden confounders that influence both states and actions can bias policy learning in reinforcement learning (RL), leading to suboptimal or non-generalizable behavior. Most RL algorithms ignore this issue, learning policies from observational trajectories based solely on statistical associations rather than causal effects. We propose DoSAC (Do-Calculus Soft Actor-Critic with Backdoor Adjustment), a principled extension of the SAC algorithm that corrects for hidden confounding via causal intervention estimation. DoSAC estimates the interventional policy $pi(a | mathrm{do}(s))$ using the backdoor criterion, without requiring access to true confounders or causal labels. To achieve this, we introduce a learnable Backdoor Reconstructor that infers pseudo-past variables (previous state and action) from the current state to enable backdoor adjustment from observational data. This module is integrated into a soft actor-critic framework to compute both the interventional policy and its entropy. Empirical results on continuous control benchmarks show that DoSAC outperforms baselines under confounded settings, with improved robustness, generalization, and policy reliability.