🤖 AI Summary
Neural ordinary differential equations (neural ODEs) currently lack rigorous guarantees for approximating dynamic optimal transport flows.
Method: We propose the first provably convergent neural ODE construction framework, integrating optimal transport theory, nonlinear control analysis, and asymptotic consistency proofs to design neural ODE architectures that satisfy the Kantorovich dynamic optimality conditions.
Contribution/Results: We establish uniform convergence of the model's solution to the true dynamic optimal transport flow in the continuous-time limit. This work bridges control theory, optimal transport, and neural ODEs through rigorous convergence guarantees, taking a significant step toward resolving a long-standing open problem in their unified modeling. It also introduces a paradigm for interpretable and verifiable neural dynamical modeling grounded in sound theoretical foundations: the framework ensures both mathematical fidelity to the underlying transport dynamics and architectural compatibility with differentiable programming, enabling principled integration of physical priors into deep learning-based dynamical systems.
📝 Abstract
From the perspective of control theory, neural ordinary differential equations (neural ODEs) have become an important tool for supervised learning. In the foundational work of Ruiz-Balet and Zuazua (SIAM Review, 2023), the authors pose an open problem regarding the connection between control theory, optimal transport theory, and neural differential equations. More precisely, they ask how one can quantify the closeness of the optimal flows in neural transport equations to the true dynamic optimal transport. In this work, we propose a construction of neural differential equations that converges to the true dynamic optimal transport in the limit, providing a significant step toward solving the aforementioned open problem.
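For context, the "true dynamic optimal transport" referenced above is standardly formulated via the Benamou–Brenier problem (this statement is background, not taken from the paper itself): the squared 2-Wasserstein distance between measures $\mu_0$ and $\mu_1$ equals the minimal kinetic energy of a flow interpolating between them,

```latex
W_2^2(\mu_0, \mu_1) \;=\; \min_{(\rho, v)} \int_0^1 \!\! \int_{\mathbb{R}^d} \lvert v_t(x) \rvert^2 \, \mathrm{d}\rho_t(x) \, \mathrm{d}t,
\qquad \text{subject to} \quad
\partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0, \quad \rho_0 = \mu_0, \ \rho_1 = \mu_1.
```

A neural ODE $\dot{x}(t) = f_\theta(x(t), t)$ induces such a flow by pushing $\mu_0$ forward along its trajectories; the open problem asks how close the induced velocity field $f_\theta$ can be made to the optimal $v_t$ above.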