Learning in Markov Decision Processes with Exogenous Dynamics

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor sample efficiency of conventional reinforcement learning in Markov decision processes (MDPs) with exogenous dynamics—where certain state variables evolve independently of the agent's actions. We formally characterize, for the first time, how the structure of exogenous states influences learning complexity, proving that the leading term of the regret bound depends solely on the size of the exogenous state space and establishing a matching information-theoretic lower bound. Leveraging this structural insight, we propose a novel algorithm tailored to exploit exogenous dynamics within a structured MDP framework. Both theoretical analysis and empirical evaluations demonstrate that our approach achieves substantially better sample efficiency than standard methods.

📝 Abstract
Reinforcement learning algorithms are typically designed for generic Markov Decision Processes (MDPs), where any state-action pair can lead to an arbitrary transition distribution. In many practical systems, however, only a subset of the state variables is directly influenced by the agent's actions, while the remaining components evolve according to exogenous dynamics and account for most of the stochasticity. In this work, we study a structured class of MDPs characterized by exogenous state components whose transitions are independent of the agent's actions. We show that exploiting this structure yields significantly improved learning guarantees, with only the size of the exogenous state space appearing in the leading terms of the regret bounds. We further establish a matching lower bound, showing that this dependence is information-theoretically optimal. Finally, we empirically validate our approach across classical toy settings and real-world-inspired environments, demonstrating substantial gains in sample efficiency compared to standard reinforcement learning methods.
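The structured MDP class described above can be sketched concretely: the state splits into an endogenous component that the action influences and an exogenous component whose transitions ignore the action entirely, so the transition kernel factorizes as P(s', x' | s, x, a) = P_endo(s' | s, x, a) · P_exo(x' | x). The toy dynamics below are purely illustrative (the names `exo_step`, `endo_step`, and the random-walk dynamics are assumptions, not taken from the paper); they only demonstrate the factorization, not the paper's algorithm.

```python
import random

def exo_step(x, rng):
    """Exogenous dynamics: a lazy random walk on {0, ..., 4}.
    Note the action never appears here -- transitions ignore it."""
    return max(0, min(4, x + rng.choice([-1, 0, 1])))

def endo_step(s, x, a, rng):
    """Endogenous dynamics: the action shifts s on {0, ..., 9};
    the exogenous component x may modulate the noise."""
    noise = rng.choice([-1, 0, 1]) if x > 2 else 0
    return max(0, min(9, s + a + noise))

def step(state, a, rng):
    """Factored transition: sample each component independently,
    mirroring P(s', x' | s, x, a) = P_endo(s' | s, x, a) * P_exo(x' | x)."""
    s, x = state
    return endo_step(s, x, a, rng), exo_step(x, rng)

rng = random.Random(0)
state = (0, 2)
for _ in range(5):
    state = step(state, a=1, rng=rng)
print(state)
```

Because `exo_step` takes no action argument, a learner that knows (or detects) this factorization can estimate the exogenous kernel from every transition regardless of the policy, which is the intuition behind the improved regret dependence on the exogenous state space alone.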
Problem

Research questions and friction points this paper is trying to address.

Markov Decision Processes
Exogenous Dynamics
Reinforcement Learning
Sample Efficiency
Regret Bounds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exogenous Dynamics
Structured MDPs
Regret Bounds
Sample Efficiency
Reinforcement Learning