Sample Complexity of Distributionally Robust Off-Dynamics Reinforcement Learning with Online Interaction

๐Ÿ“… 2025-11-07
๐Ÿ“ˆ Citations: 3
โœจ Influential: 1
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This paper addresses online robust reinforcement learning under dynamics mismatch between training and deployment environments, focusing on the exploration challenges induced by dynamics uncertainty. The authors introduce the *supremal visitation ratio* to quantify the discrepancy between environment dynamics and, within a distributionally robust MDP framework, propose the first computationally efficient online algorithm achieving sublinear regret under an *f*-divergence ambiguity set. Theoretically, matching upper and lower bounds show that the regret's dependence on both the supremal visitation ratio and the number of interaction episodes is optimal. Empirically, the algorithm improves significantly over baseline methods across diverse dynamics-shift scenarios, exhibiting both strong robustness and high sample efficiency.

๐Ÿ“ Abstract
Off-dynamics reinforcement learning (RL), where training and deployment transition dynamics are different, can be formulated as learning in a robust Markov decision process (RMDP) where uncertainties in transition dynamics are imposed. Existing literature mostly assumes access to generative models allowing arbitrary state-action queries or pre-collected datasets with a good state coverage of the deployment environment, bypassing the challenge of exploration. In this work, we study a more realistic and challenging setting where the agent is limited to online interaction with the training environment. To capture the intrinsic difficulty of exploration in online RMDPs, we introduce the supremal visitation ratio, a novel quantity that measures the mismatch between the training dynamics and the deployment dynamics. We show that if this ratio is unbounded, online learning becomes exponentially hard. We propose the first computationally efficient algorithm that achieves sublinear regret in online RMDPs with $f$-divergence based transition uncertainties. We also establish matching regret lower bounds, demonstrating that our algorithm achieves optimal dependence on both the supremal visitation ratio and the number of interaction episodes. Finally, we validate our theoretical results through comprehensive numerical experiments.
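For context, a standard *f*-divergence ambiguity set around a nominal transition kernel takes the following form (the paper's exact construction may differ; the notation here is the common one, with $P^o(\cdot\mid s,a)$ the nominal kernel and $\rho$ the uncertainty radius):

```latex
\mathcal{P}_\rho(s,a)
  = \Big\{ P(\cdot \mid s,a) \;:\;
    D_f\big(P(\cdot \mid s,a)\,\big\|\,P^o(\cdot \mid s,a)\big) \le \rho \Big\},
\qquad
D_f(P \,\|\, Q) = \sum_{s'} Q(s')\, f\!\Big(\frac{P(s')}{Q(s')}\Big),
```

where $f$ is convex with $f(1) = 0$; for example, $f(t) = t\log t$ recovers the KL divergence and $f(t) = (t-1)^2$ the $\chi^2$ divergence.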
Problem

Research questions and friction points this paper is trying to address.

Studies online reinforcement learning with mismatched training and deployment dynamics
Addresses exploration challenges without generative models or pre-collected datasets
Proposes efficient algorithm for robust MDPs with f-divergence uncertainties
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns from online interaction with the training environment alone, without a generative model or pre-collected dataset
Supremal visitation ratio measures dynamics mismatch
Computationally efficient algorithm with sublinear regret
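To build intuition for a visitation-ratio quantity, the toy sketch below compares state-occupancy measures of a training MDP and a shifted deployment MDP under a fixed policy. This is purely illustrative, not the paper's algorithm or its exact definition: the two small chains, the finite-horizon averaging, and the per-state max-ratio form are all assumptions made for the example.

```python
import numpy as np

def occupancy(P, mu0, H):
    """Average state-visitation distribution over an H-step horizon.

    P   : (S, S) state-to-state transition matrix under a fixed policy
    mu0 : (S,) initial state distribution
    H   : horizon length
    """
    d, mu = np.zeros_like(mu0), mu0.copy()
    for _ in range(H):
        d += mu          # accumulate the state distribution at each step
        mu = mu @ P      # push the distribution one step forward
    return d / H

# Two small chains whose dynamics differ only in the second state:
# the deployment chain lingers in state 1 much longer than training.
P_train = np.array([[0.9, 0.1], [0.5, 0.5]])
P_deploy = np.array([[0.9, 0.1], [0.2, 0.8]])
mu0 = np.array([1.0, 0.0])

d_train = occupancy(P_train, mu0, H=50)
d_deploy = occupancy(P_deploy, mu0, H=50)

# A visitation-ratio style quantity: how much more often the deployment
# dynamics visit some state relative to the training dynamics. If this
# blows up, the training environment cannot "see" the states that matter
# at deployment, which is the exploration difficulty the paper formalizes.
ratio = np.max(d_deploy / d_train)
print(ratio)
```

In this example the deployment chain concentrates more mass on state 1, so the ratio exceeds 1; making the deployment dynamics reach states the training dynamics almost never visit would drive the ratio arbitrarily high, mirroring the paper's hardness result for an unbounded supremal visitation ratio.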
๐Ÿ”Ž Similar Papers
No similar papers found.