Neural Estimation for Scaling Entropic Multimarginal Optimal Transport

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimarginal optimal transport (MOT) effectively models joint structure across multiple distributions, but the standard entropic Sinkhorn algorithm incurs a prohibitive O(nᵏ) time complexity, severely limiting scalability in large-scale machine learning. Method: We propose NEMOT, a scalable framework for entropic MOT that parameterizes the dual potentials with neural networks trained via mini-batch stochastic optimization, shifting the computational cost from the dataset size n to the mini-batch size. Contribution/Results: Theoretically, we derive non-asymptotic error bounds on NEMOT's accuracy and extend the approach to multimarginal entropic Gromov-Wasserstein alignment. Empirically, NEMOT achieves order-of-magnitude speedups over Sinkhorn's algorithm, scales to significantly larger sample sizes and numbers of marginals, and integrates end-to-end into large-scale ML pipelines.

📝 Abstract
Multimarginal optimal transport (MOT) is a powerful framework for modeling interactions between multiple distributions, yet its applicability is bottlenecked by a high computational overhead. Entropic regularization provides computational speedups via the multimarginal Sinkhorn algorithm, whose time complexity, for a dataset size $n$ and $k$ marginals, generally scales as $O(n^k)$. However, this dependence on the dataset size $n$ is computationally prohibitive for many machine learning problems. In this work, we propose a new computational framework for entropic MOT, dubbed Neural Entropic MOT (NEMOT), that enjoys significantly improved scalability. NEMOT employs neural networks trained using mini-batches, which transfers the computational complexity from the dataset size to the size of the mini-batch, leading to substantial gains. We provide formal guarantees on the accuracy of NEMOT via non-asymptotic error bounds. We supplement these with numerical results that demonstrate the performance gains of NEMOT over Sinkhorn's algorithm, as well as extensions to neural computation of multimarginal entropic Gromov-Wasserstein alignment. In particular, orders-of-magnitude speedups are observed relative to the state-of-the-art, with a notable increase in the feasible number of samples and marginals. NEMOT seamlessly integrates as a module in large-scale machine learning pipelines, and can serve to expand the practical applicability of entropic MOT for tasks involving multimarginal data.
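The O(nᵏ) bottleneck the abstract describes comes from the fact that multimarginal Sinkhorn manipulates a dense k-way kernel with nᵏ entries. A minimal NumPy sketch (not the paper's code; uniform marginals and a shared support are simplifying assumptions) makes the scaling concrete:

```python
import numpy as np

def multimarginal_sinkhorn(cost, eps, n_iter=300):
    """Entropic multimarginal Sinkhorn on a dense k-way cost tensor.

    `cost` has shape (n,)*k: storing and repeatedly contracting its
    n**k entries is exactly the O(n^k) bottleneck described above.
    Uniform target marginals are assumed for simplicity.
    """
    k, n = cost.ndim, cost.shape[0]
    K = np.exp(-cost / eps)                  # Gibbs kernel, n**k entries
    u = [np.ones(n) for _ in range(k)]       # one scaling vector per marginal
    marg = np.full(n, 1.0 / n)               # uniform target marginals
    letters = "abcdefghij"[:k]

    def contract_except(i):
        # contract K against every scaling vector except the i-th
        others = [j for j in range(k) if j != i]
        subs = ",".join([letters] + [letters[j] for j in others])
        return np.einsum(subs + "->" + letters[i], K, *[u[j] for j in others])

    for _ in range(n_iter):
        for i in range(k):                   # cyclic marginal updates
            u[i] = marg / contract_except(i)

    # transport plan: kernel scaled by the outer product of the u's
    return K * np.einsum(",".join(letters) + "->" + letters, *u)

# toy example: k = 3 marginals on a shared 4-point support
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))
d2 = np.sum((x[:, None] - x[None, :]) ** 2, axis=-1)     # pairwise sq. dists
cost = d2[:, :, None] + d2[:, None, :] + d2[None, :, :]  # 4**3 entries
plan = multimarginal_sinkhorn(cost, eps=5.0)
```

Every update cycle touches all nᵏ kernel entries, which is precisely the cost that a mini-batch neural approach avoids.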
Problem

Research questions and friction points this paper is trying to address.

Scaling entropic multimarginal optimal transport to large datasets
Reducing the Sinkhorn algorithm's dependence on dataset size
Increasing the feasible number of samples and marginals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural dual potentials shift complexity from dataset size to mini-batch size
Mini-batch training yields order-of-magnitude speedups
Non-asymptotic error bounds guarantee accuracy
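The mini-batch idea behind these points can be sketched as follows: the entropic MOT dual objective, sup over potentials φ₁,…,φₖ of Σᵢ E[φᵢ] − ε·E[exp((Σᵢ φᵢ − c)/ε)] + ε, is estimated on one mini-batch per marginal, so each evaluation touches only bᵏ tuples rather than nᵏ. The sketch below uses fixed callables and an assumed pairwise quadratic cost; NEMOT's actual neural parameterization and training loop are in the paper.

```python
import numpy as np

def mot_dual_objective(potentials, batches, cost_fn, eps):
    """Mini-batch estimate of the entropic MOT dual objective
    sum_i E[phi_i] - eps * E[exp((sum_i phi_i - c) / eps)] + eps.

    potentials: k callables mapping a (b, d) batch to (b,) values;
                stand-ins for the neural potentials NEMOT would train.
    batches:    k arrays of shape (b, d), one mini-batch per marginal.
    cost_fn:    maps k broadcast sample arrays to a (b,)*k cost tensor.
    Each evaluation touches b**k tuples, never the full n**k product.
    """
    b, k = batches[0].shape[0], len(batches)
    phis = [phi(x) for phi, x in zip(potentials, batches)]    # k arrays, (b,)
    term1 = sum(p.mean() for p in phis)
    phi_sum = sum(np.meshgrid(*phis, indexing="ij"))          # shape (b,)*k
    idx = np.meshgrid(*[np.arange(b)] * k, indexing="ij")
    cost = cost_fn([x[i] for x, i in zip(batches, idx)])
    term2 = eps * np.mean(np.exp((phi_sum - cost) / eps))
    return term1 - term2 + eps

def pairwise_quadratic_cost(xs):
    # sum of squared distances over all pairs of marginals (assumed cost)
    k = len(xs)
    return sum(np.sum((xs[i] - xs[j]) ** 2, axis=-1)
               for i in range(k) for j in range(i + 1, k))

# one mini-batch per marginal; zero potentials as a trivial initialization
rng = np.random.default_rng(1)
batches = [rng.normal(size=(8, 2)) for _ in range(3)]
zero = lambda x: np.zeros(len(x))
val = mot_dual_objective([zero] * 3, batches, pairwise_quadratic_cost, eps=1.0)
```

In the full method, the objective would be maximized over network parameters by stochastic gradient ascent, with fresh mini-batches drawn at each step; that is the sense in which the complexity transfers from n to the batch size b.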