🤖 AI Summary
This paper addresses planning in probabilistic graphical models, focusing on the selection mechanisms and fundamental performance limits of inference methods. Method: It introduces a novel “planning-as-variational-inference” perspective, establishing for the first time an exact correspondence between planning objectives and variational entropy weights—thereby decoupling inference type from approximation technique. It proves that classical approaches (e.g., marginal inference, MAP) are fundamentally effective only under low environmental stochasticity and provides a unified characterization of their applicability boundaries. Building on this analysis, the paper proposes a factored-state MDP planning framework tailored to loopy belief propagation, enabling scalable approximate planning in high-dimensional state spaces. Contribution/Results: The theoretical predictions are empirically validated on synthetic MDPs and benchmark tasks from the International Planning Competition, demonstrating both the accuracy of the analysis and superior algorithmic performance.
📝 Abstract
Multiple types of inference are available for probabilistic graphical models, e.g., marginal, maximum-a-posteriori, and even marginal maximum-a-posteriori. Which one do researchers mean when they talk about "planning as inference"? There is no consistency in the literature, different types are used, and their ability to do planning is further entangled with specific approximations or additional constraints. In this work we use the variational framework to show that, just like all commonly used types of inference correspond to different weightings of the entropy terms in the variational problem, planning corresponds exactly to a different set of weights. This means that all the tricks of variational inference are readily applicable to planning. We develop an analogue of loopy belief propagation that allows us to perform approximate planning in factored-state Markov decision processes without incurring intractability due to the exponentially large state space. The variational perspective shows that the previous types of inference for planning are only adequate in environments with low stochasticity, and allows us to characterize each type by its own merits, disentangling the type of inference from the additional approximations that its practical use requires. We validate these results empirically on synthetic MDPs and tasks posed in the International Planning Competition.
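To make the entropy-weighting claim concrete, here is a rough sketch of the generic weighted variational objective the abstract alludes to. The symbols \(q\), \(f_\alpha\), \(\lambda_i\), and \(H_i\) are illustrative placeholders, not the paper's exact notation:

```latex
% A graphical model p(x) \propto \prod_\alpha f_\alpha(x_\alpha).
% Inference as optimization over distributions q: expected log-score
% plus a weighted sum of entropy terms.
\max_{q}\;
  \mathbb{E}_{q}\!\left[\sum_{\alpha} \log f_{\alpha}(x_{\alpha})\right]
  \;+\; \sum_{i} \lambda_{i}\, H_{i}(q)

% \lambda_i = 1 for all i  -> marginal inference (standard variational free energy)
% \lambda_i = 0 for all i  -> MAP inference
% mixed 0/1 weights        -> marginal MAP
% Planning, per the abstract, corresponds to yet another assignment of the
% \lambda_i (roughly: entropy retained over the stochastic environment
% variables but not over the action variables).
```

The point of the correspondence is that once planning is expressed as a choice of \(\lambda_i\), any approximation scheme developed for the variational problem (e.g., loopy belief propagation) carries over to planning unchanged.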