A Policy Iteration Method for Inverse Mean Field Games

📅 2024-09-10
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses an inverse problem for mean-field games (MFGs): reconstructing the obstacle function from partially observed value functions. To overcome the computational intractability of repeatedly solving the coupled nonlinear forward-backward PDE system of the forward MFG, the authors propose a decoupled framework based on policy iteration. The method alternates between solving linear PDEs and a regularized linear inverse problem; it can be viewed as a fixed-point iteration and provably converges at a linear rate. Numerical experiments in 1D and 2D using finite-difference discretizations show that the approach outperforms a direct least-squares method in accuracy, computational efficiency, robustness to observation noise, and scalability. The authors state that, to the best of their knowledge, this is the first inverse MFG solver that offers both rigorous convergence guarantees and practical efficiency.
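The regularized linear inverse step performed in each iteration can be sketched as a Tikhonov-regularized least-squares solve. The forward matrix `A`, the synthetic data `d`, and the regularization weight `lam` below are illustrative assumptions, not the paper's actual discretization of the MFG system:

```python
import numpy as np

def tikhonov_solve(A, d, lam):
    """Solve min_F ||A F - d||^2 + lam * ||F||^2 via the normal
    equations (A^T A + lam I) F = A^T d."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d)

# Toy setup: recover a smooth "obstacle" vector from noisy linear data.
rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
F_true = np.sin(np.pi * x)            # ground-truth obstacle (illustrative)
A = np.tril(np.ones((n, n))) / n      # a smoothing forward operator (assumption)
d = A @ F_true + 1e-3 * rng.standard_normal(n)  # noisy observations
F_rec = tikhonov_solve(A, d, lam=1e-4)
```

The regularization weight trades off data fidelity against noise amplification; in the paper this solve replaces the nonlinear PDE-constrained optimization at each policy-iteration step.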

📝 Abstract
We propose a policy iteration method to solve an inverse problem for a mean-field game (MFG) model, specifically to reconstruct the obstacle function in the game from the partial observation data of value functions, which represent the optimal costs for agents. The proposed approach decouples this complex inverse problem, which is an optimization problem constrained by a coupled nonlinear forward and backward PDE system in the MFG, into several iterations of solving linear PDEs and linear inverse problems. This method can also be viewed as a fixed-point iteration that simultaneously solves the MFG system and inversion. We prove its linear rate of convergence. In addition, numerical examples in 1D and 2D, along with performance comparisons to a direct least-squares method, demonstrate the superior efficiency and accuracy of the proposed method for solving inverse MFGs.
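The fixed-point view and its linear (geometric) convergence rate can be illustrated on a toy contraction map; the map `T` below is a stand-in with Lipschitz constant 1/2, not the paper's combined MFG-and-inversion operator:

```python
import numpy as np

# Toy contraction T with Lipschitz constant L = 0.5: |T'(x)| <= 0.5,
# so the fixed-point iteration x_{k+1} = T(x_k) satisfies
# |x_k - x*| <= 0.5^k |x_0 - x*|, i.e. a linear convergence rate.
def T(x):
    return 0.5 * np.cos(x)

# Approximate the fixed point x* by iterating to machine precision.
x_star = 3.0
for _ in range(200):
    x_star = T(x_star)

# Track errors of the iteration started from x_0 = 3.
x, errors = 3.0, []
for _ in range(20):
    x = T(x)
    errors.append(abs(x - x_star))

# Consecutive error ratios stay below the contraction constant 0.5.
ratios = [errors[k + 1] / errors[k] for k in range(10)]
```

In the paper, the contraction constant depends on the MFG data, and the same geometric decay is what the linear-rate convergence theorem guarantees for the coupled iterates.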
Problem

Research questions and friction points this paper is trying to address.

Reconstruct obstacle function in MFG from partial observations
Decouple inverse MFG into linear PDEs and inverse problems
Prove linear convergence and demonstrate superior efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Policy iteration for inverse mean-field games
Decouples nonlinear PDEs into linear steps
Proves linear convergence with numerical validation
Kui Ren
Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY 10027
Nathan Soedjak
Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY 10027
Shanyin Tong
Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY 10027