🤖 AI Summary
Diffusion models (DMs) suffer from low inference efficiency, and flow matching (FM) with a standard Gaussian prior induces curved probability paths that make ODE integration costly. To address this, the paper proposes a distribution-guided FM paradigm: a regression-based auxiliary model learns a data-adaptive prior, yielding straighter, more easily integrable flow trajectories, and the method supports training and generation in both pixel and latent spaces. It combines a state-of-the-art Transformer architecture, latent-space FM modeling, regression-guided prior learning, and standard ODE solvers. Experiments demonstrate up to a 3.75× inference speedup for pixel-space generation, a 1.32× average improvement in the CLIP Maximum Mean Discrepancy (CMMD) metric for the latent-space model, and end-to-end trainability on a consumer-grade workstation.
📝 Abstract
Enhancing the efficiency of high-quality image generation using Diffusion Models (DMs) is a significant challenge due to the iterative nature of the process. Flow Matching (FM) is emerging as a powerful generative modeling paradigm based on a simulation-free training objective instead of the score-based one used in DMs. Typical FM approaches rely on a Gaussian prior distribution, which induces curved conditional probability paths between the prior and the target data distribution. These curved paths pose a challenge for the Ordinary Differential Equation (ODE) solver, requiring a large number of inference calls to the flow prediction network. To address this issue, we present Learned Distribution-guided Flow Matching (LeDiFlow), a novel scalable method for training FM-based image generation models using a better-suited prior distribution learned via a regression-based auxiliary model. By initializing the ODE solver with a prior closer to the target data distribution, LeDiFlow enables the learning of more computationally tractable probability paths. These paths directly translate to fewer solver steps needed for high-quality image generation at inference time. Our method utilizes a State-Of-The-Art (SOTA) transformer architecture combined with latent space sampling and can be trained on a consumer workstation. We empirically demonstrate that LeDiFlow substantially outperforms the respective FM baselines. For instance, when operating directly on pixels, our model accelerates inference by up to 3.75x compared to the corresponding pixel-space baseline. Simultaneously, our latent FM model enhances image quality on average by 1.32x in the CLIP Maximum Mean Discrepancy (CMMD) metric against its respective baseline.
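To make the core mechanism concrete, the toy sketch below contrasts fixed-step Euler integration started from a standard Gaussian prior with the same integration started from a prior already close to the data. This is a minimal illustration of the idea, not the paper's implementation: `flow_net`, the scalar target value, and both prior samplers are hypothetical stand-ins for the trained flow prediction network and the regression-learned prior.

```python
import numpy as np

def euler_sample(velocity_fn, x0, steps):
    """Fixed-step Euler integration of dx/dt = v(x, t) over t in [0, 1]."""
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity_fn(x, i * dt)
    return x

def flow_net(x, t):
    # Hypothetical stand-in for the flow prediction network: a toy velocity
    # field that points toward a fixed target, so trajectories decay toward it.
    target = np.full_like(x, 5.0)
    return target - x

# Baseline FM: the solver starts from a standard Gaussian prior, far from the
# data, so many steps are needed to get close to the target distribution.
x0_gaussian = np.random.default_rng(0).standard_normal(4)
img_baseline = euler_sample(flow_net, x0_gaussian, steps=50)

# LeDiFlow-style: the solver starts from a learned, data-adaptive prior
# (faked here as samples already near the target), so few steps suffice.
x0_learned = 4.5 + 0.1 * np.random.default_rng(1).standard_normal(4)
img_fast = euler_sample(flow_net, x0_learned, steps=4)
```

The design point mirrors the abstract: the flow network and solver are unchanged; only the initial distribution fed to the ODE solver differs, and a shorter, straighter path from prior to data is what lets the step count drop.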