🤖 AI Summary
How diffusion models generalize beyond their training distribution remains poorly understood: although the optimum of the denoising score matching (DSM) objective is exactly the score function of the training distribution, trained models routinely generate samples outside its support.
Method: We propose a "generalization through variance" mechanism and develop a path-integral-based theoretical framework that quantifies how the covariance of the noisy DSM target shapes the distribution a model effectively learns, implicitly guiding it toward a smooth extension of the training distribution. Combining the physics-inspired path integral analysis with exact treatments of paradigmatic under- and over-parameterized models, we characterize how this variance-driven inductive bias couples to feature-related inductive biases.
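For context, the "noisy target" in question can be written out explicitly. The block below uses a standard Gaussian-corruption parameterization of DSM (our notation: schedule coefficients $\alpha_t$, $\sigma_t$ and score model $s_\theta$; the paper's exact conventions may differ):

```latex
% A standard Gaussian-corruption form of the DSM objective, with
% x_t = \alpha_t x_0 + \sigma_t \epsilon,  \epsilon \sim \mathcal{N}(0, I):
\mathcal{L}_{\mathrm{DSM}}(\theta)
  = \mathbb{E}_{t}\, \mathbb{E}_{x_0 \sim p_{\mathrm{data}}}\,
    \mathbb{E}_{x_t \sim p_t(\cdot \mid x_0)}
    \left[ \left\| s_\theta(x_t, t)
      - \nabla_{x_t} \log p_t(x_t \mid x_0) \right\|^2 \right],
\qquad
\nabla_{x_t} \log p_t(x_t \mid x_0) = -\frac{x_t - \alpha_t x_0}{\sigma_t^2}.

% The per-sample target depends on which x_0 produced x_t; only its
% conditional expectation over x_0 given x_t equals the marginal score
% \nabla_{x_t} \log p_t(x_t). Its covariance around that score is the
% "variance" that the paper argues drives generalization.
```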
Contribution/Results: Our work shows that generalization arises from the statistical properties of the noisy training objective rather than from representational capacity alone: the effectively learned distributions resemble the training distribution with "gaps" filled in, providing an interpretable, principled foundation for controllable generation.
📝 Abstract
How diffusion models generalize beyond their training set is not known, and is somewhat mysterious given two facts: the optimum of the denoising score matching (DSM) objective usually used to train diffusion models is the score function of the training distribution; and the networks usually used to learn the score function are expressive enough to learn this score to high accuracy. We claim that a certain feature of the DSM objective -- the fact that its target is not the training distribution's score, but a noisy quantity only equal to it in expectation -- strongly impacts whether and to what extent diffusion models generalize. In this paper, we develop a mathematical theory that partly explains this 'generalization through variance' phenomenon. Our theoretical analysis exploits a physics-inspired path integral approach to compute the distributions typically learned by a few paradigmatic under- and overparameterized diffusion models. We find that the distributions diffusion models effectively learn to sample from resemble their training distributions, but with 'gaps' filled in, and that this inductive bias is due to the covariance structure of the noisy target used during training. We also characterize how this inductive bias interacts with feature-related inductive biases.
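To make the "noisy target" point concrete, here is a minimal toy sketch (our own 1-D construction with a two-point training set; not the paper's setup) that draws per-sample DSM targets and compares them, bin by bin in x_t, with the exact score of the training distribution: the targets match the score only on average and scatter around it otherwise.

```python
import numpy as np

rng = np.random.default_rng(0)
x0_train = np.array([-1.0, 1.0])     # two-point training distribution
sigma = 0.8                          # noise level of the forward process

# Draw noisy samples x_t = x_0 + sigma * eps and their per-sample DSM targets,
# i.e. the conditional scores of p(x_t | x_0).
idx = rng.integers(0, len(x0_train), size=100_000)
eps = rng.standard_normal(idx.shape)
xt = x0_train[idx] + sigma * eps
noisy_target = -(xt - x0_train[idx]) / sigma**2

def exact_score(x):
    """Exact score of the two-component Gaussian mixture p_t at noise level sigma."""
    w = np.exp(-(x[:, None] - x0_train[None, :])**2 / (2 * sigma**2))
    w /= w.sum(axis=1, keepdims=True)
    post_mean = (w * x0_train[None, :]).sum(axis=1)   # E[x_0 | x_t]
    return -(x - post_mean) / sigma**2

# Bin by x_t: the average noisy target matches the exact score, but individual
# targets scatter around it; this scatter is the variance in question.
edges = np.linspace(-3.0, 3.0, 25)
bin_id = np.digitize(xt, edges)
for b in range(5, 21, 5):
    mask = bin_id == b
    if mask.sum() > 100:
        center = 0.5 * (edges[b - 1] + edges[b])
        print(f"x_t ~ {center:+.2f}: mean target {noisy_target[mask].mean():+.3f}, "
              f"target std {noisy_target[mask].std():.3f}, "
              f"exact score {exact_score(np.array([center]))[0]:+.3f}")
```

In this toy setting the binned average of the noisy targets recovers the exact score while each individual target is off by a noise-dependent amount, which is consistent with the abstract's claim that the covariance structure of the noisy target, rather than the score itself, is what shapes the learned distribution.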