🤖 AI Summary
This work addresses a theoretical gap concerning the generalization performance of semi-dual adversarial solvers in neural optimal transport (Neural OT). Specifically, it investigates the approximation of OT maps under quadratic cost—a core setting in practical Neural OT applications.
Method: The analysis integrates the semi-dual formulation of OT, an adversarial minimax optimization framework, and Rademacher complexity theory tailored to neural network hypothesis classes.
Contribution/Results: The authors derive the first statistically rigorous upper bound on the generalization error of semi-dual adversarial Neural OT solvers. The bound depends only on standard statistical quantities, such as network capacity (e.g., spectral norm, depth, width) and sample size, and is both interpretable and extendable to broader OT variants. This result provides a formal statistical guarantee for Neural OT methods and establishes a foundation for generalization analysis of more general OT formulations.
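For context, a common form of the semi-dual minimax objective in the Neural OT literature (e.g., for quadratic cost $c(x,y)=\tfrac{1}{2}\|x-y\|^2$) is sketched below; the notation ($\mathbb{P}$, $\mathbb{Q}$, potential $f$, map $T$) is illustrative and not necessarily the paper's own:

$$
\sup_{f}\;\inf_{T}\;\mathcal{L}(f,T)
= \int f(y)\,d\mathbb{Q}(y)
+ \int \Big[\tfrac{1}{2}\|x - T(x)\|^2 - f\big(T(x)\big)\Big]\,d\mathbb{P}(x),
$$

where $f$ and $T$ are parameterized by neural networks. In practice the integrals are replaced by empirical averages over finite samples from $\mathbb{P}$ and $\mathbb{Q}$; a generalization bound of the kind described above would control the gap between the empirical and population objectives in terms of the Rademacher complexities of the two network classes and the sample size.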
📝 Abstract
Neural-network-based Optimal Transport (OT) is a recent and fruitful direction in the generative modeling community, with applications in fields such as domain translation, image super-resolution, and computational biology. Among existing approaches to OT, adversarial minimax solvers based on semi-dual formulations of OT problems are of considerable interest. While promising, these methods lack theoretical investigation from a statistical learning perspective. Our work fills this gap by establishing upper bounds on the generalization error of an approximate OT map recovered by the minimax quadratic OT solver. Importantly, the bounds we derive depend solely on standard statistical and mathematical properties of the considered functional classes (neural networks). While our analysis focuses on quadratic OT, we believe similar bounds can be derived for more general OT formulations, paving a promising direction for future research.