🤖 AI Summary
This paper investigates min-max optimization for zero-sum differential games on Riemannian manifolds, focusing on the local convergence of simultaneous τ-GDA and τ-SGA algorithms near differential Stackelberg/Nash equilibria. Methodologically, it integrates Riemannian optimization, a generalized Ostrowski theorem, and spectral analysis. The contributions are threefold: (i) it establishes sufficient conditions for the linear convergence of τ-GDA on manifolds; (ii) it proposes a manifold-adapted τ-SGA that suppresses rotational dynamics via symplectic gradient adjustment, and proves that it can converge faster to differential Stackelberg equilibria, even for large learning-rate ratios τ; (iii) experiments on orthogonal Wasserstein GAN training demonstrate substantial improvements in both stability and convergence speed over baselines.
📝 Abstract
We study min-max algorithms to solve zero-sum differential games on Riemannian manifolds. Based on the notions of differential Stackelberg equilibrium and differential Nash equilibrium on Riemannian manifolds, we analyze the local convergence of two representative deterministic simultaneous algorithms, $\tau$-GDA and $\tau$-SGA, to such equilibria. Sufficient conditions are obtained to establish the linear convergence rate of $\tau$-GDA based on the Ostrowski theorem on manifolds and spectral analysis. To avoid the strong rotational dynamics of $\tau$-GDA, $\tau$-SGA is extended from the symplectic gradient-adjustment method in Euclidean space. We analyze an asymptotic approximation of $\tau$-SGA when the learning-rate ratio $\tau$ is large. In some cases, it achieves a faster convergence rate to differential Stackelberg equilibria than $\tau$-GDA. We show numerically how the insights obtained from the convergence analysis may improve the training of orthogonal Wasserstein GANs using stochastic $\tau$-GDA and $\tau$-SGA on simple benchmarks.
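To make the two algorithms concrete, here is a minimal sketch in the Euclidean special case (the paper's setting is Riemannian, where updates would additionally use retractions). The toy quadratic game, its coefficients, and all step-size values below are illustrative choices, not taken from the paper: $\tau$-GDA runs simultaneous descent/ascent with the follower's learning rate scaled by the ratio $\tau$, while $\tau$-SGA adds the symplectic adjustment $v + \lambda A^{\top} v$, with $A$ the antisymmetric part of the game Jacobian, to damp rotation around the equilibrium.

```python
import numpy as np

# Toy zero-sum game f(x, y) = 0.5*x^2 + 2*x*y - 0.5*y^2,
# where x minimizes and y maximizes; coefficients are hypothetical.
def grads(x, y):
    return x + 2 * y, 2 * x - y  # (df/dx, df/dy)

def tau_gda(x, y, eta=0.05, tau=4.0, steps=500):
    """tau-GDA: simultaneous descent on x and ascent on y, with the
    follower's (y's) learning rate scaled by the ratio tau."""
    for _ in range(steps):
        gx, gy = grads(x, y)
        x, y = x - eta * gx, y + tau * eta * gy
    return x, y

def tau_sga(x, y, eta=0.05, tau=4.0, lam=1.0, steps=500):
    """tau-SGA: adjust the simultaneous-gradient field v by lam * A^T v,
    where A is the antisymmetric part of the game Jacobian, which damps
    the rotational dynamics that slow tau-GDA down."""
    J = np.array([[1.0, 2.0], [-2.0, 1.0]])  # Jacobian of v for this game
    A = 0.5 * (J - J.T)                      # antisymmetric (rotational) part
    for _ in range(steps):
        gx, gy = grads(x, y)
        v = np.array([gx, -gy])              # descent direction for (x, y)
        adj = v + lam * (A.T @ v)            # symplectic gradient adjustment
        x, y = x - eta * adj[0], y - tau * eta * adj[1]
    return x, y
```

On this game both iterations contract to the unique equilibrium at the origin; the adjustment in `tau_sga` cancels the rotational part of the vector field, which is the acceleration mechanism the convergence analysis quantifies.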