A Fisher-Rao gradient flow for entropy-regularised Markov decision processes in Polish spaces

📅 2023-10-04
🏛️ arXiv.org
📈 Citations: 11
Influential: 3
🤖 AI Summary
This work studies the global convergence of the Fisher–Rao policy gradient flow for infinite-horizon entropy-regularised Markov decision processes (MDPs) on Polish spaces. Despite the non-convexity of the policy optimisation objective, the authors establish the global well-posedness and exponential convergence of this gradient flow under the Fisher–Rao metric, along with its robustness to policy-gradient estimation error. The analysis combines the performance difference lemma, dual-flow techniques, and entropy-regularised MDP theory to uncover intrinsic connections between the Fisher–Rao gradient flow, mirror descent, and natural policy gradients. The results yield convergence guarantees, under non-convexity, for discrete-time policy gradient algorithms such as soft Q-learning and natural policy gradient, and fill a fundamental gap in the global convergence analysis of continuous-time policy flows over general Riemannian metric spaces.
📝 Abstract
We study the global convergence of a Fisher-Rao policy gradient flow for infinite-horizon entropy-regularised Markov decision processes with Polish state and action spaces. The flow is a continuous-time analogue of a policy mirror descent method. We establish the global well-posedness of the gradient flow and demonstrate its exponential convergence to the optimal policy. Moreover, we prove the flow is stable with respect to gradient evaluation, offering insights into the performance of a natural policy gradient flow with log-linear policy parameterisation. To overcome challenges stemming from the lack of convexity of the objective function and the discontinuity arising from the entropy regulariser, we leverage the performance difference lemma and the duality relationship between the gradient and mirror descent flows. Our analysis provides a theoretical foundation for developing various discrete-time policy gradient algorithms.
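The abstract describes the flow as a continuous-time analogue of policy mirror descent. A minimal discrete-time sketch of that idea on a tabular, entropy-regularised MDP is below; the toy MDP, step sizes, and the reward-maximisation convention are illustrative assumptions, not taken from the paper. Each iteration evaluates the soft Q-function of the current policy, then applies a mirror-descent / natural-policy-gradient step; the factor `(1 - eta*tau)` on `log pi` accounts for the entropy regulariser.

```python
import numpy as np

# Hypothetical toy MDP (3 states, 2 actions); all sizes and rates are
# illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
nS, nA, gamma, tau, eta = 3, 2, 0.9, 0.1, 0.5

P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # transition kernel P[s, a, s']
r = rng.random((nS, nA))                       # rewards in [0, 1)

def soft_q(pi, iters=500):
    """Evaluate the entropy-regularised Q-function of policy pi by
    fixed-point iteration on the soft Bellman equation."""
    Q = np.zeros((nS, nA))
    for _ in range(iters):
        # soft value: V^tau(s) = sum_a pi(a|s) * (Q(s,a) - tau*log pi(a|s))
        V = (pi * (Q - tau * np.log(pi))).sum(axis=1)
        Q = r + gamma * P @ V
    return Q

pi = np.full((nS, nA), 1.0 / nA)  # start from the uniform policy
for _ in range(300):
    Q = soft_q(pi)
    # One mirror-descent / natural-policy-gradient step:
    #   pi_{k+1}(a|s) ∝ pi_k(a|s)^(1 - eta*tau) * exp(eta * Q(s, a))
    logits = (1.0 - eta * tau) * np.log(pi) + eta * Q
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)

# At the fixed point, pi is the softmax of Q/tau (the soft-optimal policy).
pi_star = np.exp(Q / tau - (Q / tau).max(axis=1, keepdims=True))
pi_star /= pi_star.sum(axis=1, keepdims=True)
```

The exponential convergence established in the paper for the continuous-time flow shows up here as a geometric contraction of `pi` towards `pi_star` in the iteration count.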
Problem

Research questions and friction points this paper is trying to address.

Global convergence of Fisher-Rao policy gradient flow
Exponential convergence to optimal entropy-regularised policies
Stability analysis of gradient flow with log-linear parameterisation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fisher-Rao gradient flow for MDPs
Global convergence with entropy regularization
Duality between gradient and mirror descent
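The last point can be made concrete with a schematic version of the flow. In a cost-minimisation convention (signs, normalisation, and the regularised Q-function notation below are assumed here, not quoted from the paper), the Fisher-Rao flow is a replicator-type dynamics whose logarithm is a mirror (dual) flow, and whose explicit Euler step recovers a policy mirror descent update:

```latex
% Fisher--Rao (replicator) form of the flow:
\partial_t \pi_t(a \mid s)
  = -\pi_t(a \mid s)\Big( Q^{\tau}_{\pi_t}(s,a)
    - \int_{A} Q^{\tau}_{\pi_t}(s,a')\, \pi_t(\mathrm{d}a' \mid s) \Big).

% Taking logarithms turns this into a mirror (dual) flow:
\partial_t \log \pi_t(a \mid s)
  = -\, Q^{\tau}_{\pi_t}(s,a) + c_t(s),
\qquad
c_t(s) := \int_{A} Q^{\tau}_{\pi_t}(s,a')\, \pi_t(\mathrm{d}a' \mid s).

% An explicit Euler step of size \eta recovers policy mirror descent:
\pi_{k+1}(a \mid s) \;\propto\; \pi_k(a \mid s)\,
  \exp\big( -\eta\, Q^{\tau}_{\pi_k}(s,a) \big).
```

The normalising term \(c_t(s)\) keeps \(\pi_t(\cdot \mid s)\) a probability measure, since the right-hand side of the first equation integrates to zero over \(A\).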
B. Kerimkulov
School of Mathematics, University of Edinburgh, United Kingdom
J. Leahy
Department of Mathematics, Imperial College London, United Kingdom
D. Šiška
School of Mathematics, University of Edinburgh, United Kingdom
Lukasz Szpruch
University of Edinburgh and The Alan Turing Institute
Machine Learning · Reinforcement Learning · Stochastic Control · Quantitative Finance · Statistical Sampling
Yufei Zhang
Department of Mathematics, Imperial College London, United Kingdom