AI Summary
Standard conditional flow matching (CFM) often yields generation paths that deviate from the straight-line interpolation between prior and target distributions, resulting in low sampling efficiency, reduced accuracy, and reliance on fine discretization. To address this, we propose Weighted Conditional Flow Matching (W-CFM), the first CFM variant to incorporate entropy-regularized optimal transport (OT) couplings into the CFM framework. By weighting training sample pairs via a Gibbs kernel, W-CFM explicitly steers the learned vector field toward the OT geodesic. Theoretically, we prove that under large-batch training, W-CFM converges to mini-batch OT, thereby unifying the modeling fidelity of OT with the computational efficiency of CFM. Empirically, W-CFM achieves superior or competitive sample quality, fidelity, and diversity across multiple synthetic and real-world benchmarks, while retaining training and inference costs comparable to standard CFM.
Abstract
Conditional flow matching (CFM) has emerged as a powerful framework for training continuous normalizing flows due to its computational efficiency and effectiveness. However, standard CFM often produces paths that deviate significantly from straight-line interpolations between prior and target distributions, making generation slower and less accurate due to the need for fine discretization at inference. Recent methods enhance CFM performance by inducing shorter and straighter trajectories but typically rely on computationally expensive mini-batch optimal transport (OT). Drawing insights from entropic optimal transport (EOT), we propose Weighted Conditional Flow Matching (W-CFM), a novel approach that modifies the classical CFM loss by weighting each training pair $(x, y)$ with a Gibbs kernel. We show that this weighting recovers the entropic OT coupling up to some bias in the marginals, and we provide conditions under which the marginals remain nearly unchanged. Moreover, we establish an equivalence between W-CFM and mini-batch OT in the large-batch limit, showing how our method overcomes the computational and performance bottlenecks linked to batch size. Empirically, we test our method on unconditional generation across various synthetic and real datasets, confirming that W-CFM achieves sample quality, fidelity, and diversity comparable or superior to alternative baselines while maintaining the computational efficiency of vanilla CFM.
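The weighting idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a squared-Euclidean cost in the Gibbs kernel $\exp(-\|x-y\|^2/\varepsilon)$, batch self-normalization of the weights, and a linear interpolant with conditional velocity $y - x$; the function name `w_cfm_loss`, the `vector_field` callable, and the parameter `eps` are all hypothetical.

```python
import numpy as np

def w_cfm_loss(vector_field, x0, x1, eps=1.0, rng=None):
    """Sketch of a W-CFM-style weighted loss (assumed form).

    Each training pair (x0, x1) is weighted by a Gibbs kernel
    exp(-||x0 - x1||^2 / eps), so pairs closer under the transport
    cost contribute more to the regression target.
    """
    rng = rng or np.random.default_rng(0)
    b = x0.shape[0]
    t = rng.random((b, 1))                     # one time per pair, t ~ U[0, 1]
    xt = (1.0 - t) * x0 + t * x1               # straight-line interpolant
    target = x1 - x0                           # conditional velocity along the line
    cost = np.sum((x0 - x1) ** 2, axis=1)      # squared Euclidean transport cost
    w = np.exp(-cost / eps)                    # Gibbs kernel weight per pair
    w = w / w.sum()                            # normalize over the batch
    per_pair = np.sum((vector_field(t, xt) - target) ** 2, axis=1)
    return float(np.sum(w * per_pair))         # weighted CFM regression loss
```

Setting all weights equal (e.g. `eps -> inf`) recovers the ordinary unweighted CFM objective, which is why the method keeps vanilla-CFM training cost: the only extra work per batch is evaluating the kernel.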