🤖 AI Summary
Diffusion models suffer from training-data memorization, which poses privacy risks and limits generative creativity. This work is the first to theoretically characterize the trade-off between memorization and generalization from two complementary perspectives, statistical estimation and neural approximation: it proves that a non-negligible gap exists between the true score function and its empirical counterpart, arising from systematic bias in the empirical denoising loss and from overfitting induced by neural network over-parameterization. Building on this analysis, the authors propose a structured pruning strategy tailored to diffusion Transformers that suppresses memorization while preserving generation quality. Experiments show that the method substantially mitigates memorization, e.g., reducing the training-set reconstruction rate by over 80%, without degrading FID, LPIPS, or other fidelity metrics. Together, the theory and algorithm provide a provably grounded framework for understanding and controllably suppressing memorization in diffusion models.
📝 Abstract
Diffusion models have achieved remarkable success across diverse domains, but they remain vulnerable to memorization -- reproducing training data rather than generating novel outputs. This not only limits their creative potential but also raises privacy and safety concerns. While empirical studies have explored mitigation strategies, theoretical understanding of memorization remains limited. We address this gap by developing a dual-separation result from two complementary perspectives: statistical estimation and network approximation. On the estimation side, we show that the ground-truth score function does not minimize the empirical denoising loss, creating a separation that drives memorization. On the approximation side, we prove that implementing the empirical score function requires the network size to scale with the sample size, yielding a separation from the more compact network representation that suffices for the ground-truth score function. Guided by these insights, we develop a pruning-based method that reduces memorization while maintaining generation quality in diffusion transformers.
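The estimation-side separation can be made concrete in standard denoising score matching notation. The sketch below uses our own gloss of the usual setup (symbols $\alpha_t, \sigma_t, p_0$ are generic diffusion notation, not necessarily the paper's), to illustrate why the empirical loss is not minimized by the true score:

```latex
% Population denoising loss, minimized by the true score s_t^*(x) = \nabla \log p_t(x):
\mathcal{L}(s) \;=\; \mathbb{E}_{x_0 \sim p_0}\,\mathbb{E}_{t,\varepsilon}
\Big[\big\| s(x_t, t) + \tfrac{\varepsilon}{\sigma_t} \big\|^2\Big],
\qquad x_t = \alpha_t x_0 + \sigma_t \varepsilon .

% Its empirical counterpart replaces p_0 with the training set \{x^{(i)}\}_{i=1}^n:
\widehat{\mathcal{L}}(s) \;=\; \frac{1}{n}\sum_{i=1}^{n}\,\mathbb{E}_{t,\varepsilon}
\Big[\big\| s\big(\alpha_t x^{(i)} + \sigma_t \varepsilon,\, t\big)
+ \tfrac{\varepsilon}{\sigma_t} \big\|^2\Big].

% \widehat{\mathcal{L}} is minimized by the score of the Gaussian-smoothed
% empirical distribution \hat p_t, not by s_t^*. Sampling along
% \nabla \log \hat p_t drives trajectories back toward training points
% as t \to 0, which is memorization.
```

A network large enough to fit $\nabla \log \hat p_t$ exactly must encode all $n$ training points, which is one intuition behind the approximation-side claim that the empirical score requires network size scaling with sample size.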
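To make the pruning idea tangible, here is a minimal sketch of magnitude-based structured pruning on a single weight matrix, the kind of operation one could apply to MLP or attention projections inside a diffusion transformer block. The function name, the `frac` knob, and the L2-norm criterion are illustrative assumptions; the paper's actual selection rule for which units to remove may differ.

```python
import numpy as np

def structured_prune(weight, frac, axis=0):
    """Zero out the lowest-L2-norm structured units of a 2-D weight matrix.

    weight: 2-D array, e.g. one linear layer in a transformer block.
    frac:   fraction of units to prune (hypothetical knob, not the
            paper's exact hyperparameter).
    axis:   0 prunes output units (rows), 1 prunes input units (columns).
    """
    norms = np.linalg.norm(weight, axis=1 - axis)  # one norm per unit
    k = int(frac * norms.size)
    pruned = weight.copy()
    if k == 0:
        return pruned
    idx = np.argsort(norms)[:k]  # indices of the weakest units
    if axis == 0:
        pruned[idx, :] = 0.0
    else:
        pruned[:, idx] = 0.0
    return pruned

# Example: prune 25% of the output units of an 8x16 layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
W_pruned = structured_prune(W, frac=0.25, axis=0)
print(int((np.linalg.norm(W_pruned, axis=1) == 0).sum()))  # prints 2
```

Because whole rows or columns are removed rather than scattered weights, the pruned layer can be physically shrunk afterwards; the paper's contribution is showing that such capacity reduction suppresses memorization without hurting fidelity metrics.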