Non-asymptotic error bounds for probability flow ODEs under weak log-concavity

📅 2025-10-20
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing convergence theory for score-based generative models offers no non-asymptotic guarantees under only weak log-concavity and Lipschitz continuity of the score function. Method: We derive a non-asymptotic bound on the 2-Wasserstein distance for probability flow ODE sampling, relaxing restrictive assumptions such as strong log-concavity and bounded support. Contribution/Results: We establish the first unified non-asymptotic convergence analysis under weak log-concavity, explicitly quantifying initialization error, score estimation error, and the discretization error of an exponential integrator scheme. The theory applies to non-log-concave target distributions, including Gaussian mixtures, and provides quantitative guidance for hyperparameter selection (e.g., step size), yielding a rigorous and practically applicable convergence guarantee for diffusion model sampling.
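The three error sources named above suggest a bound of the following schematic shape; this is a hedged sketch of how such decompositions typically look, not the paper's stated theorem, and the constants c, C_1, C_2 and the linear order in the step size h are placeholders:

$$
W_2(\hat{p}_0, p_{\mathrm{data}}) \;\lesssim\; \underbrace{e^{-cT}\, W_2(\hat{p}_T, p_T)}_{\text{initialization}} \;+\; \underbrace{C_1\, \varepsilon_{\mathrm{score}}}_{\text{score estimation}} \;+\; \underbrace{C_2\, h}_{\text{discretization}}
$$

Here T is the integration horizon, ε_score bounds the L² score approximation error, and h is the step size of the exponential integrator.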

📝 Abstract
Score-based generative modeling, implemented through probability flow ODEs, has shown impressive results in numerous practical settings. However, most convergence guarantees rely on restrictive regularity assumptions on the target distribution -- such as strong log-concavity or bounded support. This work establishes non-asymptotic convergence bounds in the 2-Wasserstein distance for a general class of probability flow ODEs under considerably weaker assumptions: weak log-concavity and Lipschitz continuity of the score function. Our framework accommodates non-log-concave distributions, such as Gaussian mixtures, and explicitly accounts for initialization errors, score approximation errors, and effects of discretization via an exponential integrator scheme. Bridging a key theoretical challenge in diffusion-based generative modeling, our results extend convergence theory to more realistic data distributions and practical ODE solvers. We provide concrete guarantees for the efficiency and correctness of the sampling algorithm, complementing the empirical success of diffusion models with rigorous theory. Moreover, from a practical perspective, our explicit rates might be helpful in choosing hyperparameters, such as the step size in the discretization.
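To make the sampler concrete, below is a minimal Python sketch of probability flow ODE sampling with an exponential integrator for the standard variance-preserving (VP) process, whose ODE is dx/dt = -0.5·β(t)·(x + score(x, t)). The names (pf_ode_sample, score_fn) and the constant noise schedule are illustrative assumptions; the paper's exact scheme and schedule may differ. Each step solves the linear drift exactly and freezes the score, giving x_{k-1} = exp(Δλ)·x_k + (exp(Δλ) - 1)·score(x_k, t_k) with Δλ = 0.5·∫β over the step.

```python
import numpy as np

def pf_ode_sample(score_fn, dim, T=5.0, n_steps=500, beta=lambda t: 1.0, seed=0):
    """Integrate the VP probability flow ODE dx/dt = -0.5*beta(t)*(x + score(x, t))
    backwards from t = T to t = 0 with an exponential-integrator step: the linear
    drift is solved exactly and the score is frozen at the start of each step."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)          # initialize from N(0, I), approximating p_T
    ts = np.linspace(T, 0.0, n_steps + 1)
    for t_hi, t_lo in zip(ts[:-1], ts[1:]):
        # dlam = 0.5 * integral of beta over [t_lo, t_hi], via the midpoint rule
        dlam = 0.5 * beta(0.5 * (t_hi + t_lo)) * (t_hi - t_lo)
        x = np.exp(dlam) * x + (np.exp(dlam) - 1.0) * score_fn(x, t_hi)
    return x
```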
Problem

Research questions and friction points this paper is trying to address.

Existing convergence guarantees for probability flow ODEs require restrictive conditions such as strong log-concavity or bounded support of the target
Prior theory does not cover common non-log-concave targets such as Gaussian mixtures (see the score sketch after this list)
Initialization, score approximation, and discretization errors are rarely quantified together in a single non-asymptotic bound
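As a concrete instance of the non-log-concave targets mentioned above, the score of an isotropic Gaussian mixture has a closed form. The sketch below (hypothetical helper gmm_score, uniform weights by default) illustrates the class the paper's assumptions are meant to cover: such a mixture is not log-concave once its means are well separated, yet its score stays Lipschitz.

```python
import numpy as np
from scipy.special import logsumexp

def gmm_score(x, means, sigma2=1.0, weights=None):
    """Score grad log p(x) of an isotropic Gaussian mixture p = sum_i w_i N(mu_i, sigma2 I).
    Not log-concave for well-separated means, but the score is Lipschitz."""
    means = np.atleast_2d(means)                    # component means, shape (k, d)
    k = means.shape[0]
    weights = np.full(k, 1.0 / k) if weights is None else np.asarray(weights)
    diffs = means - x                               # (k, d), broadcast against x of shape (d,)
    log_resp = np.log(weights) - 0.5 * np.sum(diffs**2, axis=1) / sigma2
    resp = np.exp(log_resp - logsumexp(log_resp))   # posterior responsibilities r_i(x)
    return resp @ diffs / sigma2                    # sum_i r_i(x) * (mu_i - x) / sigma2
```

Under the VP forward process a Gaussian mixture remains a Gaussian mixture (component means shrink by exp(-λ(t)) and variances relax toward 1), so a time-rescaled gmm_score can serve as an exact score_fn for the sampler sketched after the abstract.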
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-asymptotic 2-Wasserstein bounds under weak log-concavity and Lipschitz scores
Unified framework accommodating non-log-concave targets such as Gaussian mixtures
Explicit control of initialization, score estimation, and exponential-integrator discretization errors, with rates that inform step-size selection (see the sketch below)
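If, as in the schematic decomposition after the summary, the initialization term decays exponentially in the horizon and the discretization term scales linearly in the step size, explicit rates can be inverted to choose hyperparameters. The helper below is a sketch under those assumed scalings; c and c_disc are placeholders, not constants from the paper.

```python
import math

def choose_hyperparams(eps, w2_init, c=0.5, c_disc=1.0):
    """Back out a horizon T and a step size h from a target W2 accuracy eps,
    assuming a bound of the schematic form exp(-c*T)*w2_init + eps_score + c_disc*h.
    The constants c and c_disc are placeholders, not the paper's actual rates."""
    budget = eps / 3.0                   # split the accuracy budget across the three terms
    T = math.log(w2_init / budget) / c   # drives the initialization term below eps/3
    h = budget / c_disc                  # drives the discretization term below eps/3
    return T, h
```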