🤖 AI Summary
Problem: Conventional denoising diffusion models are time-inhomogeneous: they employ a fixed number of noising/denoising steps and cannot adaptively adjust to varying noise levels or data structures. This is particularly problematic for data on low-dimensional manifolds or information-sparse data, where deciding when to terminate denoising is difficult.
Method: We propose Homogenized Diffusion Models, leveraging Doob’s *h*-transform to introduce a noise-level-dependent stochastic termination mechanism. This reformulates the diffusion process as a time-homogeneous Markov process and incorporates a first-passage-time-based stopping criterion.
Contribution/Results: Our method enables unconditional pre-trained models to be directly deployed for conditional generation and noise-robust classification—without fine-tuning. Experiments demonstrate significant improvements in sampling efficiency and generation fidelity on low-dimensional manifold data, validating both the theoretical generality and practical applicability of the framework.
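The first-passage-time stopping criterion can be illustrated with a toy simulation. The sketch below is not the paper's actual dynamics: it assumes a simple 1D radial coordinate relaxing toward a "data manifold" at radius `manifold_r` under Ornstein-Uhlenbeck-type dynamics, with denoising terminated at the random time the coordinate first hits the manifold (rather than after a fixed number of steps). All parameter names and values here are illustrative assumptions.

```python
import math
import random

random.seed(0)

def denoise_until_first_hit(r0=3.0, drift=1.0, sigma=0.3, dt=1e-3,
                            manifold_r=1.0, max_steps=100_000):
    """Toy sketch of a first-hitting stopping rule (not the paper's model).

    A 1D radial coordinate r_t evolves by the Euler-Maruyama scheme for
        dr = -drift * (r - manifold_r) dt + sigma dW,
    and the process stops at the first time r_t hits the manifold radius.
    The number of steps is therefore random and adapts to the noise level
    (here, the initial distance r0 from the manifold).
    """
    r, t = r0, 0.0
    for _ in range(max_steps):
        if r <= manifold_r:          # first-hitting rule: adaptive, random stopping time
            return r, t
        r += (-drift * (r - manifold_r) * dt
              + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
        t += dt
    return r, t                      # fallback if the manifold was never hit

r_final, t_stop = denoise_until_first_hit()
```

Starting farther from the manifold (larger `r0`) yields a longer hitting time on average, which is the adaptive-step-count behavior the summary describes.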
📝 Abstract
We introduce a new class of generative diffusion models that, unlike conventional denoising diffusion models, achieve a time-homogeneous structure for both the noising and denoising processes, allowing the number of steps to adaptively adjust based on the noise level. This is accomplished by conditioning the forward process using Doob's $h$-transform, which terminates the process, at a random time, in a suitable sampling distribution. The model is particularly well suited for generating data with lower intrinsic dimension, as the termination criterion simplifies to a first-hitting rule. A key feature of the model is its adaptability to the target data, enabling a variety of downstream tasks using a pre-trained unconditional generative model. These tasks include natural conditioning through appropriate initialization of the denoising process and classification of noisy data.