Efficient Policy Optimization in Robust Constrained MDPs with Iteration Complexity Guarantees

📅 2025-05-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies robust constrained Markov decision processes (RCMDPs) under model misspecification: learning policies that satisfy constraints and achieve high reward under the worst-case stochastic environment within an uncertainty set, given only access to a biased nominal model. The authors propose the first primal-dual algorithm for RCMDPs that avoids binary search, overcoming two fundamental obstacles: the absence of strong duality and the heterogeneity of worst-case models across dual variables. The method is the first to provide an $\mathcal{O}(\varepsilon^{-2})$ iteration complexity guarantee, yielding an $\varepsilon$-optimal and strictly feasible solution. The theoretical analysis leverages robust optimization principles and a novel decomposition technique for composite value functions. Empirically, the algorithm achieves a 4–6× speedup over the state-of-the-art method, significantly reducing computational overhead for both low and high discount factors.

📝 Abstract
Constrained decision-making is essential for designing safe policies in real-world control systems, yet simulated environments often fail to capture real-world adversities. We consider the problem of learning a policy that maximizes the cumulative reward while satisfying a constraint, even when there is a mismatch between the real model and an accessible simulator/nominal model. In particular, we consider the robust constrained Markov decision problem (RCMDP), where an agent must maximize the reward and satisfy the constraint against the worst possible stochastic model within an uncertainty set centered around an unknown nominal model. Primal-dual methods, effective for standard constrained MDPs (CMDPs), are not applicable here because of the lack of strong duality. Nor can one apply the standard robust value-iteration-based approach to the composite value function, as the worst-case models may differ between the reward value function and the constraint value function. We propose a novel technique that minimizes the constraint value function to satisfy the constraints; once all constraints are satisfied, it simply maximizes the robust reward value function. We prove that such an algorithm finds an $\epsilon$-suboptimal and feasible policy after $O(\epsilon^{-2})$ iterations. In contrast to the state-of-the-art method, we do not need to employ a binary search; thus, we reduce the computation time by at least 4x for smaller values of the discount factor ($\gamma$) and by at least 6x for larger values of $\gamma$.
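The switching rule described in the abstract (descend the worst-case constraint value while the constraint is violated or within a feasibility margin, otherwise ascend the worst-case reward value) can be sketched numerically. The toy setup below, a single state with two actions and a finite set of models standing in for the uncertainty set, is an illustrative assumption and not the paper's construction:

```python
import numpy as np

# Hypothetical toy problem: one state, two actions. The "uncertainty set"
# is approximated by a finite set of reward/cost vectors.
reward_models = [np.array([1.0, 0.2]), np.array([0.9, 0.3])]
cost_models = [np.array([0.8, 0.1]), np.array([0.7, 0.15])]
budget = 0.5   # constraint: worst-case expected cost must stay <= budget
tol = 0.05     # margin enforcing strict feasibility

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

theta = np.zeros(2)  # policy logits
lr = 0.5
for _ in range(500):
    pi = softmax(theta)
    # worst-case (largest) constraint value over the model set
    worst_cost_vec = max(cost_models, key=lambda c: c @ pi)
    vc = worst_cost_vec @ pi
    if vc > budget - tol:
        # constraint violated or within the margin: descend its value
        v = -worst_cost_vec
    else:
        # feasible: ascend the worst-case (smallest) reward value
        v = min(reward_models, key=lambda r: r @ pi)
    # exact softmax policy gradient of v @ pi w.r.t. the logits
    theta += lr * pi * (v - v @ pi)

pi = softmax(theta)
```

The finite `max`/`min` over a handful of model vectors is only a stand-in for the inner worst-case optimization the paper performs over the uncertainty set; the point of the sketch is the outer switching logic, which needs no dual variable and hence no binary search.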
Problem

Research questions and friction points this paper is trying to address.

Maximizing reward while satisfying constraints under model mismatch
Solving robust constrained MDPs without strong duality
Achieving efficient policy optimization with iteration guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel technique for robust constrained MDPs
Minimizes constraint value function effectively
Efficient policy optimization without binary search