🤖 AI Summary
This paper addresses online robust reinforcement learning under dynamics mismatch between training and deployment environments, focusing on the exploration challenges induced by dynamics uncertainty. We introduce the *supremal visitation ratio* to quantify the discrepancy between environment dynamics and, within a distributionally robust MDP framework, propose the first computationally efficient online algorithm achieving sublinear regret under *f*-divergence ambiguity sets. Theoretically, we establish matching upper and lower regret bounds, showing that the algorithm attains optimal dependence on both the supremal visitation ratio and the number of interaction episodes. Empirically, the algorithm significantly outperforms baseline methods across diverse dynamics-shift scenarios, exhibiting both strong robustness and high sample efficiency.
Abstract
Off-dynamics reinforcement learning (RL), where training and deployment transition dynamics differ, can be formulated as learning in a robust Markov decision process (RMDP) in which uncertainty is imposed on the transition dynamics. Existing literature mostly assumes access to generative models allowing arbitrary state-action queries, or to pre-collected datasets with good state coverage of the deployment environment, bypassing the challenge of exploration. In this work, we study a more realistic and challenging setting where the agent is limited to online interaction with the training environment. To capture the intrinsic difficulty of exploration in online RMDPs, we introduce the supremal visitation ratio, a novel quantity that measures the mismatch between the training dynamics and the deployment dynamics. We show that if this ratio is unbounded, online learning becomes exponentially hard. We propose the first computationally efficient algorithm that achieves sublinear regret in online RMDPs with $f$-divergence based transition uncertainties. We also establish matching regret lower bounds, demonstrating that our algorithm achieves optimal dependence on both the supremal visitation ratio and the number of interaction episodes. Finally, we validate our theoretical results through comprehensive numerical experiments.
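To give intuition for the kind of mismatch a visitation ratio captures, here is a minimal sketch on a tabular MDP. It assumes a simplified definition (not necessarily the paper's formal one): the supremum, over steps and state-action pairs, of the ratio between the state-action visitation probabilities induced by a fixed policy under the deployment kernel versus the training kernel. The function name `supremal_visitation_ratio` and this exact construction are illustrative assumptions.

```python
import numpy as np

def supremal_visitation_ratio(P_train, P_deploy, pi, mu0, horizon):
    """Illustrative (assumed) definition: sup over steps and (s, a) of the
    deployment-vs-training visitation ratio for a fixed policy pi.

    P_train, P_deploy: (S, A, S) transition kernels
    pi: (S, A) policy, mu0: (S,) initial state distribution
    A result of np.inf corresponds to the unbounded regime in which,
    per the paper, online learning becomes exponentially hard.
    """
    d_train, d_deploy = mu0.copy(), mu0.copy()
    ratio = 0.0
    for _ in range(horizon):
        occ_train = d_train[:, None] * pi    # (S, A) state-action visitation
        occ_deploy = d_deploy[:, None] * pi
        mask = occ_deploy > 0                # only compare reachable pairs
        with np.errstate(divide="ignore"):   # 0 denominator -> inf (unbounded)
            ratio = max(ratio, float(np.max(occ_deploy[mask] / occ_train[mask])))
        # propagate the state distribution one step under each kernel
        d_train = np.einsum("sa,sat->t", occ_train, P_train)
        d_deploy = np.einsum("sa,sat->t", occ_deploy, P_deploy)
    return ratio
```

When the two kernels coincide the ratio is 1; as the deployment dynamics concentrate mass on states the training dynamics rarely visit, the ratio grows, matching the abstract's claim that an unbounded ratio makes online learning intractable.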