Provable Separations between Memorization and Generalization in Diffusion Models

📅 2025-11-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion models suffer from training data memorization, posing privacy risks and hindering generative creativity. This work is the first to theoretically characterize the fundamental trade-off between memorization and generalization from both statistical estimation and neural approximation perspectives: we prove that a non-negligible bias exists between the true score function and its empirical counterpart—arising from systematic bias in the empirical denoising loss and overfitting induced by neural network over-parameterization. Building on this analysis, we propose a structured pruning strategy tailored for diffusion Transformers, which suppresses memorization while rigorously preserving generation quality. Experiments demonstrate that our method substantially mitigates memorization—e.g., reducing training-set reconstruction rate by over 80%—without degrading FID, LPIPS, or other fidelity metrics. This work establishes the first provably grounded theoretical framework and practical algorithm for understanding and controllably suppressing memorization in diffusion models.

Technology Category

Application Category

📝 Abstract
Diffusion models have achieved remarkable success across diverse domains, but they remain vulnerable to memorization -- reproducing training data rather than generating novel outputs. This not only limits their creative potential but also raises concerns about privacy and safety. While empirical studies have explored mitigation strategies, theoretical understanding of memorization remains limited. We address this gap through developing a dual-separation result via two complementary perspectives: statistical estimation and network approximation. From the estimation side, we show that the ground-truth score function does not minimize the empirical denoising loss, creating a separation that drives memorization. From the approximation side, we prove that implementing the empirical score function requires network size to scale with sample size, spelling a separation compared to the more compact network representation of the ground-truth score function. Guided by these insights, we develop a pruning-based method that reduces memorization while maintaining generation quality in diffusion transformers.
Problem

Research questions and friction points this paper is trying to address.

Theoretical understanding of memorization in diffusion models remains limited
Memorization limits creative potential and raises privacy concerns
Separation exists between memorization and generalization in diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statistical estimation reveals score function separation
Network approximation shows size-sample scaling requirement
Pruning method reduces memorization while preserving quality
🔎 Similar Papers
No similar papers found.
Z
Zeqi Ye
Department of Industrial Engineering and Management Sciences, Northwestern University
Q
Qijie Zhu
Department of Statistics and Data Science, Northwestern University
Molei Tao
Molei Tao
Associate Professor, Georgia Institute of Technology
foundation of machine learningapplied & computational mathstochastic/nonlinear dynamics
Minshuo Chen
Minshuo Chen
Northwestern University
Diffusion ModelReinforcement LearningGenerative Modeling