AI Summary
We address the sequential maintenance decision problem for large-scale, multi-component monotonic partially observable Markov decision processes (POMDPs) under budget constraints, a setting in which the state space grows exponentially with the number of components, rendering conventional methods intractable. We propose a two-stage decoupling framework: first, we approximate the optimal value function of each individual component via random forest regression, enabling efficient global budget allocation; second, we employ the optimal policy of the corresponding fully observable MDP as an oracle to guide meta-reinforcement learning (using PPO) for solving each single-component subproblem independently. This is the first work to jointly integrate monotonicity modeling, value-function approximation, and oracle-guided meta-training, achieving scalable, high-accuracy solutions for systems with hundreds of components. Experiments demonstrate significant improvements over baselines in budget utilization and policy interpretability, and validation on real-world building maintenance tasks confirms both practical utility and robustness.
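To make the first stage concrete, here is a minimal sketch of value-function approximation plus budget allocation. It is illustrative only: the training data are synthetic (a hypothetical concave value-vs-budget curve per component), the component feature (a "degradation rate") is invented for the example, and the greedy marginal-gain allocation is one simple way to use such approximations, not necessarily the allocation scheme used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: for each (component degradation rate,
# allocated budget) pair, a discounted value that would in practice
# come from solving the single-component POMDP offline.  Here it is
# faked with a concave synthetic function of the budget.
rates = rng.uniform(0.1, 0.9, size=500)
budgets = rng.integers(0, 11, size=500)
values = (1.0 - rates) * np.sqrt(budgets) + rng.normal(0.0, 0.01, 500)

# Random forest approximation of the per-component value function.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(np.column_stack([rates, budgets]), values)

def allocate(component_rates, total_budget):
    """Greedy allocation: repeatedly give one budget unit to the
    component with the largest predicted marginal value gain."""
    alloc = np.zeros(len(component_rates), dtype=int)
    for _ in range(total_budget):
        cur = model.predict(np.column_stack([component_rates, alloc]))
        nxt = model.predict(np.column_stack([component_rates, alloc + 1]))
        alloc[np.argmax(nxt - cur)] += 1
    return alloc

alloc = allocate(np.array([0.2, 0.5, 0.8]), total_budget=6)
```

Once the budget is split this way, each component reduces to an independent budget-constrained single-component POMDP.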
Abstract
Monotonic Partially Observable Markov Decision Processes (POMDPs), in which the system state progressively degrades until a restorative action is performed, effectively model sequential repair problems. This paper considers the problem of solving budget-constrained multi-component monotonic POMDPs, where a finite budget limits the maximum number of restorative actions. For a large number of components, solving such a POMDP with current methods is computationally intractable because the state space grows exponentially with the number of components. To address this challenge, we propose a two-step approach. Since the individual components of a budget-constrained multi-component monotonic POMDP are coupled only through the shared budget, we first approximate the optimal budget allocation among the components using an approximation of each component POMDP's optimal value function, obtained through a random forest model. Subsequently, we introduce an oracle-guided meta-trained Proximal Policy Optimization (PPO) algorithm to solve each of the resulting independent budget-constrained single-component monotonic POMDPs. The oracle policy is obtained by performing value iteration on the corresponding monotonic Markov Decision Process (MDP). This two-step method scales to truly massive multi-component monotonic POMDPs. To demonstrate the efficacy of our approach, we consider a real-world maintenance scenario involving the inspection and repair of an administrative building by a team of agents within a maintenance budget. Finally, we analyze the computational complexity for a varying number of components to show the scalability of the proposed approach.
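The oracle construction above (value iteration on the fully observable monotonic MDP) can be sketched as follows. The toy component below is hypothetical, not taken from the paper: four health states that deterministically degrade under "do nothing" and reset to full health under a costly "repair" action.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Standard value iteration. P[a] is an (S, S) transition matrix
    for action a; R is an (S, A) reward matrix. Returns the optimal
    value function and the greedy policy, which serves as the fully
    observable oracle for a single component."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a][s, s'] * V[s']
        Q = R + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy monotonic component: 4 health states (3 = best, 0 = failed).
n = 4
decay = np.zeros((n, n))
for s in range(n):
    decay[s, max(s - 1, 0)] = 1.0        # action 0: state degrades by one
repair = np.zeros((n, n))
repair[:, n - 1] = 1.0                    # action 1: restore full health
P = [decay, repair]
R = np.stack([np.arange(n, dtype=float),           # reward = current health
              np.arange(n, dtype=float) - 2.0],    # repair costs 2
             axis=1)
V, oracle_policy = value_iteration(P, R)
```

In the paper's pipeline, the resulting oracle policy guides meta-training of the PPO agent on the corresponding partially observable problem; the sketch only covers the fully observable oracle itself. Note that the computed value function is monotone in the health state, consistent with the monotonic structure.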