🤖 AI Summary
This work addresses the inverse problem of reconstructing population evolutionary dynamics from discrete-time particle snapshots. We propose iJKOnet, a framework that embeds inverse optimization into the JKO gradient-flow discretization scheme on the space of probability measures equipped with the Wasserstein metric, jointly modeling the underlying dynamics and the observation process. Unlike prior JKO-based methods, iJKOnet does not impose strong structural assumptions, such as input-convex neural networks, on the drift potential, achieving both theoretical guarantees (existence and convergence of solutions) and practical flexibility via end-to-end adversarial training. Empirically, iJKOnet outperforms existing JKO baselines across multiple dynamics-recovery tasks, yielding substantially improved accuracy in reconstructing evolutionary trajectories.
📝 Abstract
Learning population dynamics involves recovering the underlying process that governs particle evolution, given evolutionary snapshots of samples at discrete time points. Recent methods frame this as an energy minimization problem in probability space and leverage the celebrated JKO scheme for efficient time discretization. In this work, we introduce $\texttt{iJKOnet}$, an approach that combines the JKO framework with inverse optimization techniques to learn population dynamics. Our method relies on a conventional $\textit{end-to-end}$ adversarial training procedure and does not require restrictive architectural choices, e.g., input-convex neural networks. We establish theoretical guarantees for our methodology and demonstrate improved performance over prior JKO-based methods.
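For context, the JKO scheme referenced above discretizes the Wasserstein gradient flow of an energy functional $F$ over probability measures: given time step $\tau > 0$ and the current measure $\rho_k$, the next measure is obtained by a proximal step in the 2-Wasserstein metric,

$$
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)} \; \frac{1}{2\tau} W_2^2(\rho, \rho_k) + F(\rho),
$$

where $W_2$ denotes the 2-Wasserstein distance. The forward problem iterates this update to evolve the population; the inverse problem considered here is to recover $F$ (e.g., a drift potential) from snapshots of $\rho_k$ at discrete times.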