Local convergence of simultaneous min-max algorithms to differential equilibrium on Riemannian manifold

📅 2024-05-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates min-max optimization for zero-sum differential games on Riemannian manifolds, focusing on the local convergence of the simultaneous τ-GDA and τ-SGA algorithms near differential Stackelberg/Nash equilibria. Methodologically, it integrates Riemannian optimization, a generalized Ostrowski theorem, and spectral analysis. The contributions are threefold: (i) it establishes the first sufficient condition for linear convergence of τ-GDA on manifolds; (ii) it proposes a manifold-adapted τ-SGA that suppresses rotational dynamics via symplectic gradient adjustment, and theoretically proves its accelerated convergence to differential Stackelberg equilibria, even for large learning-rate ratios τ; (iii) experiments on orthogonal Wasserstein GAN training demonstrate substantial improvements in both stability and convergence speed over baselines.

📝 Abstract
We study min-max algorithms to solve zero-sum differential games on Riemannian manifold. Based on the notions of differential Stackelberg equilibrium and differential Nash equilibrium on Riemannian manifold, we analyze the local convergence of two representative deterministic simultaneous algorithms $\tau$-GDA and $\tau$-SGA to such equilibria. Sufficient conditions are obtained to establish the linear convergence rate of $\tau$-GDA based on the Ostrowski theorem on manifold and spectral analysis. To avoid strong rotational dynamics in $\tau$-GDA, $\tau$-SGA is extended from the symplectic gradient-adjustment method in Euclidean space. We analyze an asymptotic approximation of $\tau$-SGA when the learning rate ratio $\tau$ is big. In some cases, it can achieve a faster convergence rate to differential Stackelberg equilibrium compared to $\tau$-GDA. We show numerically how the insights obtained from the convergence analysis may improve the training of orthogonal Wasserstein GANs using stochastic $\tau$-GDA and $\tau$-SGA on simple benchmarks.
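To make the setup concrete, here is a minimal sketch of the two-timescale idea behind τ-GDA, not the paper's exact algorithm: a min-max game f(x, y) = xᵀAy with both players on unit spheres, where the Riemannian gradient is assumed to be the Euclidean gradient projected onto the tangent space, the retraction is assumed to be simple normalization, and the max player's step is scaled by the learning-rate ratio τ.

```python
import numpy as np

# Hypothetical sketch of tau-GDA for min_x max_y f(x, y) = x^T A y
# with x, y on unit spheres. Assumptions: normalization retraction,
# tangent-space projection for the Riemannian gradient, and a max-player
# step tau times larger than the min-player step.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

def tangent_proj(p, g):
    """Project g onto the tangent space of the unit sphere at p."""
    return g - (p @ g) * p

def retract(p, v):
    """Normalization retraction: move along v, then renormalize."""
    q = p + v
    return q / np.linalg.norm(q)

def tau_gda(x, y, eta=0.01, tau=10.0, steps=5000):
    for _ in range(steps):
        gx = tangent_proj(x, A @ y)      # Riemannian grad of f in x
        gy = tangent_proj(y, A.T @ x)    # Riemannian grad of f in y
        x = retract(x, -eta * gx)        # min player: descent
        y = retract(y, tau * eta * gy)   # max player: faster ascent
    return x, y

x = retract(rng.standard_normal(3), np.zeros(3))
y = retract(rng.standard_normal(3), np.zeros(3))
x, y = tau_gda(x, y)
# Near a critical point both Riemannian gradients should be small.
print(np.linalg.norm(tangent_proj(x, A @ y)),
      np.linalg.norm(tangent_proj(y, A.T @ x)))
```

With a large τ the inner (max) player approximately tracks its best response, which is the timescale-separation regime the paper's Stackelberg analysis concerns.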
Problem

Research questions and friction points this paper is trying to address.

Analyzing local convergence of min-max algorithms on Riemannian manifolds.
Establishing linear convergence rates for τ-GDA using Ostrowski theorem.
Extending τ-SGA to achieve faster convergence in differential games.
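The Ostrowski-based rate result in the list above rests on a standard fixed-point fact; a sketch of the Euclidean statement (the paper's contribution is the analogous statement on a Riemannian manifold) is:

```latex
% Ostrowski-type local linear convergence (Euclidean sketch; the paper
% works with the analogous statement on a Riemannian manifold).
% If z^* is a fixed point of the iteration z_{k+1} = F(z_k) and the
% spectral radius of the Jacobian at z^* is below one, the iterates
% converge locally at a linear rate:
\[
  \rho\big(DF(z^{*})\big) < 1
  \;\Longrightarrow\;
  \lVert z_{k} - z^{*}\rVert
  \le C\,(\rho + \epsilon)^{k}\,\lVert z_{0} - z^{*}\rVert
  \quad \text{for all } \epsilon > 0 \text{ and } z_{0} \text{ near } z^{*}.
\]
```

For τ-GDA, F is the simultaneous update map, so the rate is governed by the spectrum of its differential at the equilibrium.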
Innovation

Methods, ideas, or system contributions that make the work stand out.

Min-max algorithms on Riemannian manifold
Linear convergence rate via Ostrowski theorem
Symplectic gradient-adjustment for faster convergence
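The symplectic gradient adjustment that τ-SGA extends can be illustrated in Euclidean space on the classic bilinear game f(x, y) = x·y, where plain simultaneous GDA rotates and slowly diverges while the adjusted dynamics converge. This is a sketch of the Euclidean method (Balduzzi et al.'s SGA), not the paper's manifold version; for this game the simultaneous-gradient field is w = (∂f/∂x, -∂f/∂y) = (y, -x) with antisymmetric Jacobian J = [[0, 1], [-1, 0]], and SGA follows w + λJᵀw.

```python
import numpy as np

# Euclidean symplectic gradient adjustment (SGA) on f(x, y) = x*y.
# w = (y, -x) is the simultaneous gradient; its Jacobian
# J = [[0, 1], [-1, 0]] is antisymmetric, and the adjusted field
# w + lam * J^T w damps the rotation that makes plain GDA spiral out.

def step(z, eta, lam):
    x, y = z
    w = np.array([y, -x])            # simultaneous gradient
    Jt_w = np.array([-w[1], w[0]])   # J^T w for J = [[0, 1], [-1, 0]]
    return z - eta * (w + lam * Jt_w)

z_gda = np.array([1.0, 1.0])
z_sga = np.array([1.0, 1.0])
for _ in range(200):
    z_gda = step(z_gda, eta=0.1, lam=0.0)  # plain GDA: rotates, diverges
    z_sga = step(z_sga, eta=0.1, lam=1.0)  # SGA: contracts to (0, 0)

print(np.linalg.norm(z_gda), np.linalg.norm(z_sga))
```

The per-step linear map under SGA has eigenvalues (1 - ηλ) ± iη of modulus below one here, which is the "suppressed rotational dynamics" the summary refers to.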
Sixin Zhang
Université de Toulouse, INP, IRIT, 2, rue Camichel, BP 7122, 31071 Toulouse Cedex 7, France