🤖 AI Summary
This work addresses the inverse problem of mean-field games (MFGs): reconstructing the obstacle function from partially observed value functions. To overcome the computational intractability of solving the coupled nonlinear forward-backward PDE system inherent in the forward MFG, we propose a decoupled framework based on policy iteration. Our method alternates between solving a linear PDE and a regularized linear inverse problem; it can be viewed as a fixed-point iteration and provably achieves a linear rate of convergence. Numerical experiments in 1D and 2D using finite-difference discretization demonstrate that our approach significantly outperforms direct least-squares methods in accuracy, computational efficiency, robustness to observation noise, and scalability. To the best of our knowledge, this is the first inverse MFG solver that simultaneously offers rigorous theoretical guarantees and practical efficacy.
📝 Abstract
We propose a policy iteration method to solve an inverse problem for a mean-field game (MFG) model, specifically to reconstruct the obstacle function in the game from partial observations of the value functions, which represent the optimal costs for agents. The proposed approach decouples this complex inverse problem, an optimization problem constrained by the coupled nonlinear forward-backward PDE system of the MFG, into iterations of solving linear PDEs and linear inverse problems. The method can also be viewed as a fixed-point iteration that simultaneously solves the MFG system and the inversion. We prove its linear rate of convergence. In addition, numerical examples in 1D and 2D, along with performance comparisons to a direct least-squares method, demonstrate the superior efficiency and accuracy of the proposed method for solving inverse MFGs.
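The decoupling described above can be illustrated on a toy problem. The sketch below is not the paper's solver: it drops the mean-field coupling through the agent density and applies the same alternation to a 1D stationary viscous Hamilton-Jacobi-Bellman equation, -ν u'' + |u'|²/2 = f, with a quadratic Hamiltonian, Dirichlet boundary conditions, observations of u at all interior grid points (the paper handles partial observations), and plain Tikhonov regularization. All parameter values and the choice of obstacle f are illustrative assumptions. Each iteration performs (1) a policy-evaluation step that solves a *linear* PDE with the control frozen, (2) a regularized *linear* inverse problem for the obstacle f, and (3) a policy-improvement step, mirroring the paper's fixed-point structure.

```python
import numpy as np

# -- grid and finite-difference operators on [0, 1], Dirichlet BCs u(0)=u(1)=0 --
N  = 101                          # grid points (toy value, illustrative)
x  = np.linspace(0.0, 1.0, N)
h  = x[1] - x[0]
nu = 0.1                          # viscosity (illustrative)
xi = x[1:-1]                      # interior nodes
n  = N - 2

I  = np.eye(n)
D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
D2 = (np.diag(np.ones(n - 1), 1) - 2 * I + np.diag(np.ones(n - 1), -1)) / h**2

def hjb_operator(q):
    """Linearized HJB operator for a frozen policy q: u -> -nu u'' + q u'."""
    return -nu * D2 + np.diag(q) @ D1

# -- synthetic data: solve the forward HJB  -nu u'' + |u'|^2 / 2 = f  --
f_true = np.sin(np.pi * xi)       # "true" obstacle (illustrative choice)
q = np.zeros(n)
for _ in range(30):               # forward policy iteration (Newton's method here)
    u = np.linalg.solve(hjb_operator(q), f_true + 0.5 * q**2)
    q = D1 @ u                    # optimal control for a quadratic Hamiltonian
u_obs = u                         # observed value function

# -- inverse problem: recover f from u_obs by the decoupled iteration --
alpha = 1e-12                     # Tikhonov weight (illustrative)
f = np.zeros(n)
q = np.zeros(n)
for _ in range(10):
    A    = hjb_operator(q)        # (1) policy evaluation: a linear PDE
    Ainv = np.linalg.inv(A)
    G    = Ainv                   # linear obstacle-to-observation map (full obs.)
    b    = u_obs - Ainv @ (0.5 * q**2)
    # (2) regularized linear inverse problem for the obstacle f
    f = np.linalg.solve(G.T @ G + alpha * I, G.T @ b)
    u = Ainv @ (f + 0.5 * q**2)   # value function under current (f, q)
    q = D1 @ u                    # (3) policy improvement

print("max obstacle error:", np.max(np.abs(f - f_true)))
```

Because each step only solves linear systems, the per-iteration cost stays low even though the underlying forward problem is nonlinear; with partial observations, as in the paper, the least-squares step becomes underdetermined and the regularization term carries more weight.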