Saddle-Free Guidance: Improved On-Manifold Sampling without Labels or Additional Training

📅 2025-11-26
🤖 AI Summary
To address the limited guidance efficacy of score-based generative models in label-free, fine-tuning-free scenarios, this paper proposes Saddle-Free Guidance (SFG). The authors identify the positive curvature of the log-density function in saddle-point regions as a strong geometric guidance signal, one that requires neither additional model training nor labeled data and is compatible with standard diffusion and flow-matching frameworks. SFG integrates efficient curvature estimation with manifold-aware sampling and combines effectively with Auto-Guidance. On ImageNet-512, it achieves state-of-the-art FID (1.89) and FD-DINOv2 (14.2). Applied to FLUX.1-dev and Stable Diffusion v3.5, SFG significantly improves both image diversity and prompt fidelity, with computational overhead comparable to classifier-free guidance (CFG).

📝 Abstract
Score-based generative models require guidance in order to generate plausible, on-manifold samples. The most popular guidance method, Classifier-Free Guidance (CFG), is only applicable in settings with labeled data and requires training an additional unconditional score-based model. More recently, Auto-Guidance adopts a smaller, less capable version of the original model to guide generation. While each method effectively promotes the fidelity of generated data, each requires labeled data or the training of additional models, making it challenging to guide score-based models when (labeled) training data are not available or training new models is not feasible. We make the surprising discovery that the positive curvature of log-density estimates in saddle regions provides strong guidance for score-based models. Motivated by this, we develop Saddle-Free Guidance (SFG), which maintains estimates of the maximal positive curvature of the log density to guide individual score-based models. SFG has the same computational cost as classifier-free guidance, does not require additional training, and works with off-the-shelf diffusion and flow matching models. Our experiments indicate that SFG achieves state-of-the-art FID and FD-DINOv2 metrics in single-model unconditional ImageNet-512 generation. When SFG is combined with Auto-Guidance, its unconditional samples achieve the overall state-of-the-art FD-DINOv2 score. Our experiments with FLUX.1-dev and Stable Diffusion v3.5 indicate that SFG boosts the diversity of output images compared to CFG while maintaining excellent prompt adherence and image fidelity.
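The curvature signal the abstract describes can, in principle, be estimated from score evaluations alone, since the Hessian of the log density is the Jacobian of the score. Below is a minimal sketch of that idea, not the paper's algorithm: the toy saddle log-density, the spectral shift, and all function names are illustrative assumptions. It estimates the maximal positive curvature direction via a finite-difference Hessian-vector product and power iteration.

```python
import numpy as np

# Toy log-density with a saddle at the origin (illustrative assumption):
# log p(x) = -0.5 * (x0^2 - x1^2), so the score is s(x) = (-x0, x1)
# and the Hessian of log p is diag(-1, +1): positive curvature along x1.
def score(x):
    return np.array([-x[0], x[1]])

def hvp(x, v, eps=1e-4):
    """Hessian-vector product of log p via a finite difference of the score."""
    return (score(x + eps * v) - score(x)) / eps

def max_positive_curvature(x, shift=10.0, iters=100, seed=0):
    """Power iteration on (H + shift*I) to find the most positive
    eigenpair of the log-density Hessian H."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(x.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = hvp(x, v) + shift * v
        v = w / np.linalg.norm(w)
    lam = v @ hvp(x, v)  # Rayleigh quotient recovers the curvature value
    return lam, v

lam, v = max_positive_curvature(np.array([0.5, 0.5]))
print(round(lam, 3), np.round(np.abs(v), 3))  # ≈ 1.0, direction along the x1 axis
```

Adding `shift * I` biases power iteration toward the most positive eigenvalue rather than the largest in magnitude, which matters at a saddle where positive and negative curvatures can have similar size.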
Problem

Research questions and friction points this paper is trying to address.

How to sample on-manifold without labels or additional training
How to guide score-based models using positive curvature estimates
How to raise both image diversity and fidelity in unconditional generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses positive curvature of log density for guidance
No additional training or labeled data required
Works with off-the-shelf diffusion and flow models
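To illustrate the cost-parity point behind these bullets: CFG spends two model evaluations per sampling step (conditional and unconditional), and a curvature-guided step can likewise be built from two evaluations of a single unconditional model, one of them at a perturbed point for a Hessian-vector estimate. The sketch below is hypothetical: the update rule, sign convention, weighting, and the toy Gaussian score are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def score(x):
    # Stand-in for a pretrained score network; here the score of a
    # standard Gaussian, s(x) = -x (illustrative assumption).
    return -x

def cfg_step(x, score_cond, score_uncond, w, dt):
    # Classifier-free guidance: two model evaluations per Euler step.
    s = score_uncond(x) + w * (score_cond(x) - score_uncond(x))
    return x + dt * s

def sfg_like_step(x, score_fn, v, w, dt, eps=1e-3):
    # Hypothetical curvature-guided step: also two evaluations of a
    # single unconditional model -- s(x) plus one perturbed call for
    # the Hessian-vector estimate along a tracked direction v.
    s = score_fn(x)
    hv = (score_fn(x + eps * v) - s) / eps   # HVP via finite difference
    curv = float(v @ hv)                     # curvature of log p along v
    guide = max(curv, 0.0) * v               # act only on positive curvature
    return x + dt * (s + w * guide)
```

On this toy Gaussian the log density has no positive curvature, so `guide` is zero and the step falls back to plain unguided Euler, which is the intended degenerate behavior of the sketch away from saddle regions.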