Monte Carlo Planning for Stochastic Control on Constrained Markov Decision Processes

📅 2024-06-23
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional Markov Decision Processes (MDPs) neglect the dynamic causal structure underlying transition and reward functions, leading to suboptimal resource allocation. Method: We propose the State-Decision MDP (SD-MDP) framework, the first to explicitly model and exploit hierarchical causal constraints—encoded via causal graphs—on transition and reward functions, enabling causal disentanglement and compact representation of state-decision dynamics. Theoretical contributions include a formal SD-MDP formulation, a Monte Carlo value estimation error bound, and a simple regret bound for Monte Carlo Tree Search (MCTS). Methodologically, SD-MDP integrates causal graph modeling, constrained MDPs, and sample-based policy planning. Results: In a ship bunkering optimization case study, the SD-MDP-guided MCTS policy reduces operational costs by 12.7% under identical simulation budgets, empirically validating both the tightness of our theoretical bounds and the practical efficacy of the framework.

📝 Abstract
In the world of stochastic control, especially in economics and engineering, Markov Decision Processes (MDPs) can effectively model various stochastic decision processes, from asset management to transportation optimization. These underlying MDPs, upon closer examination, often reveal a specifically constrained causal structure concerning the transition and reward dynamics. By exploiting this structure, we can obtain a reduction in the causal representation of the problem setting, allowing us to solve for the optimal value function more efficiently. This work defines an MDP framework, the SD-MDP, where we disentangle the causal structure of MDPs' transition and reward dynamics, providing distinct partitions on the temporal causal graph. With this stochastic reduction, the SD-MDP reflects a general class of resource allocation problems. This disentanglement further enables us to derive theoretical guarantees on the estimation error of the value function under an optimal policy by allowing independent value estimation from Monte Carlo sampling. Subsequently, by integrating this estimator into well-known Monte Carlo planning algorithms, such as Monte Carlo Tree Search (MCTS), we derive bounds on the simple regret of the algorithm. Finally, we quantify the policy improvement of MCTS under the SD-MDP framework by demonstrating that the MCTS planning algorithm achieves higher expected reward (lower costs) under a constant simulation budget, on a tangible economic example based on maritime refuelling.
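The "independent value estimation from Monte Carlo sampling" mentioned in the abstract can be pictured as rolling out a policy against exogenous noise that is sampled independently of the decisions. A minimal sketch of that idea, assuming a generic interface; all function names and signatures here are illustrative, not the paper's code:

```python
def mc_value_estimate(state, policy, sample_exogenous, transition, reward,
                      horizon, n_samples):
    """Monte Carlo estimate of the value of `policy` from `state`.

    Sketch of the disentangled-estimation idea: the exogenous noise
    (e.g. a price process) is sampled independently of the decisions,
    so each rollout just overlays the policy's actions on those samples.
    """
    total = 0.0
    for _ in range(n_samples):
        s, ret = state, 0.0
        for t in range(horizon):
            w = sample_exogenous(t)   # decision-independent randomness
            a = policy(s, t)
            ret += reward(s, a, w)
            s = transition(s, a, w)
        total += ret
    return total / n_samples       # empirical mean return over rollouts
```

Averaging independent rollouts like this is what makes concentration-style error bounds on the value estimate possible.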
Problem

Research questions and friction points this paper is trying to address.

Incorporates causal structure into MDPs for resource allocation problems
Reduces the sequential optimization to a fractional knapsack problem with log-linear complexity
Enables efficient Monte Carlo planning in high-dimensional state spaces
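The fractional-knapsack reduction above refers to the classical greedy algorithm, whose cost is dominated by a single sort. A toy version of that classical algorithm for reference (not the paper's exact construction):

```python
def fractional_knapsack(capacity, items):
    """Greedy fractional knapsack in O(n log n) (the sort dominates).

    `items` is a list of (value, weight) pairs; items may be taken
    fractionally. Greedily taking the best value-per-weight ratio
    first is optimal for the fractional variant.
    """
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # take all of it, or the remainder
        total += value * take / weight
        capacity -= take
    return total
```

With T decision epochs in place of n items, this is where the O(T log T) figure comes from.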
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages causal disentanglement for MDP decomposition
Reduces the optimization to a fractional knapsack problem solvable in O(T log T)
Integrates with Monte Carlo Tree Search for efficiency
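As a rough picture of Monte Carlo planning under a fixed simulation budget, a flat (tree-less) variant can be sketched as follows; MCTS adds a search tree and selective sampling on top of this, and the names here are illustrative assumptions:

```python
def mc_plan(state, actions, simulate, budget):
    """Flat Monte Carlo planning: split a fixed simulation budget
    evenly across actions and return the action with the best
    empirical mean return.

    `simulate(state, action)` runs one stochastic rollout and returns
    its total reward. Simple regret is the value gap between the
    returned action and the truly best one.
    """
    per_action = budget // len(actions)
    best_a, best_mean = None, float("-inf")
    for a in actions:
        mean = sum(simulate(state, a) for _ in range(per_action)) / per_action
        if mean > best_mean:
            best_a, best_mean = a, mean
    return best_a
```

Tighter value estimators (such as the disentangled one the paper proposes) let the same budget rank actions more reliably, which is exactly what a simple-regret bound quantifies.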