Robust Regularized Policy Iteration under Transition Uncertainty

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation of offline reinforcement learning under distributional shift, which arises from encountering out-of-distribution state-action pairs. The authors formulate this challenge as a robust policy optimization problem by jointly optimizing the policy and the worst-case dynamics within an uncertainty set over the transition kernel. They introduce a KL-regularized robust policy iteration scheme that unifies the handling of policy extrapolation and model uncertainty. The resulting robust Bellman operator is shown to be γ-contractive and to guarantee monotonic policy improvement. Empirical evaluations on the D4RL benchmark demonstrate that the proposed method outperforms existing approaches such as PMDB on average, while exhibiting significantly reduced Q-values in regions of high epistemic uncertainty, thereby effectively avoiding out-of-distribution actions.
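To make the formulation concrete: the max-min problem and a KL-regularized surrogate of the kind described above can be written as follows (the notation, $\hat{P}$ for the nominal kernel, $\beta$ for the penalty weight, and $\epsilon$ for the uncertainty radius, is assumed for illustration and is not taken from the paper):

$$
\max_{\pi} \; \min_{P \in \mathcal{P}} \; \mathbb{E}_{\pi, P}\Big[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)\Big],
\qquad
\mathcal{P} = \Big\{ P : \mathrm{KL}\big(P(\cdot \mid s, a) \,\|\, \hat{P}(\cdot \mid s, a)\big) \le \epsilon \Big\}.
$$

Replacing the hard constraint with a KL penalty gives the inner minimization a closed form via the Donsker-Varadhan (log-sum-exp) duality,

$$
\min_{P} \; \mathbb{E}_{s' \sim P}\big[V(s')\big] + \tfrac{1}{\beta}\,\mathrm{KL}\big(P \,\|\, \hat{P}\big)
\;=\; -\tfrac{1}{\beta} \log \mathbb{E}_{s' \sim \hat{P}}\Big[e^{-\beta V(s')}\Big],
$$

which is one standard way a KL-regularized robust Bellman operator becomes both tractable and a $\gamma$-contraction.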

📝 Abstract
Offline reinforcement learning (RL) enables data-efficient and safe policy learning without online exploration, but its performance often degrades under distribution shift. The learned policy may visit out-of-distribution state-action pairs where value estimates and learned dynamics are unreliable. To address policy-induced extrapolation and transition uncertainty in a unified framework, we formulate offline RL as robust policy optimization, treating the transition kernel as a decision variable within an uncertainty set and optimizing the policy against the worst-case dynamics. We propose Robust Regularized Policy Iteration (RRPI), which replaces the intractable max-min bilevel objective with a tractable KL-regularized surrogate and derives an efficient policy iteration procedure based on a robust regularized Bellman operator. We provide theoretical guarantees by showing that the proposed operator is a $\gamma$-contraction and that iteratively updating the surrogate yields monotonic improvement of the original robust objective, with convergence. Experiments on D4RL benchmarks demonstrate that RRPI achieves strong average performance, outperforming recent baselines including percentile-based methods such as PMDB on the majority of environments while remaining competitive on the rest. Moreover, RRPI exhibits robust behavior: the learned $Q$-values decrease in regions with higher epistemic uncertainty, suggesting that the resulting policy avoids unreliable out-of-distribution actions under transition uncertainty.
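As a concrete illustration of such an operator, here is a minimal tabular sketch of a KL-penalized robust Bellman backup, together with a numerical check of the $\gamma$-contraction property claimed in the abstract. The operator form, function names, and parameters below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def robust_regularized_backup(V, R, P_hat, gamma=0.99, beta=1.0):
    """One sweep of a KL-penalized robust Bellman operator on a tabular MDP.

    Hypothetical sketch, not the paper's code. The worst-case expected
    next value under a KL penalty has the closed form
        -(1/beta) * log E_{s' ~ P_hat}[exp(-beta * V(s'))],
    which interpolates between the nominal backup (beta -> 0) and the
    hardest next state (beta -> inf).

    V:     (S,)      state values
    R:     (S, A)    rewards
    P_hat: (S, A, S) nominal transition kernel (last axis sums to 1)
    """
    m = V.min()
    w = np.exp(-beta * (V - m))  # shifted for numerical stability, values in (0, 1]
    soft_worst = m - np.log(np.einsum("sat,t->sa", P_hat, w)) / beta  # (S, A)
    return R + gamma * soft_worst  # robust Q(s, a)

# Numerical check: the backup is a sup-norm gamma-contraction, so iterating
# it converges to a unique robust fixed point.
rng = np.random.default_rng(0)
S, A = 6, 3
R = rng.random((S, A))
P_hat = rng.random((S, A, S))
P_hat /= P_hat.sum(axis=-1, keepdims=True)
V1, V2 = rng.random(S), rng.random(S)
gap = np.abs(robust_regularized_backup(V1, R, P_hat)
             - robust_regularized_backup(V2, R, P_hat)).max()
assert gap <= 0.99 * np.abs(V1 - V2).max() + 1e-12
```

In a full algorithm along RRPI's lines, a backup of this kind would alternate with a KL-regularized policy improvement step; the sketch only illustrates why a regularized robust operator stays contractive and why its values are pessimistic where the nominal model is least trusted.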
Problem

Research questions and friction points this paper is trying to address.

offline reinforcement learning
distribution shift
transition uncertainty
out-of-distribution
robust policy optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robust Reinforcement Learning
Offline RL
Transition Uncertainty
Regularized Bellman Operator
Distributional Shift
👥 Authors
Hongqiang Lin
Zhejiang University
Zhenghui Fu
Northwestern Polytechnical University
Weihao Tang
Zhejiang University
Pengfei Wang
Zhejiang University
Yiding Sun
Renmin University of China
Large Language Models · Explainable Recommendation
Qixian Huang
Sun Yat-sen University
Dongxu Zhang
Optum AI, PhD from UMass Amherst
LLMs · natural language processing · representation learning · machine learning