Beyond Memorization: Gradient Projection Enables Selective Learning in Diffusion Models

📅 2025-12-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion models tend to memorize sensitive concepts from training data, posing privacy and intellectual property risks. Existing mitigation strategies either fail to precisely suppress concept-level memorization or require data removal, leading to resource inefficiency. This paper introduces "selective learning," a novel paradigm that dynamically projects gradients onto the orthogonal complement of the sensitive concept's embedding space during backpropagation, enabling fine-grained, semantically aware intervention at the gradient level. The method preserves all training data, maintains generation quality (FID and CLIP Score statistically indistinguishable from baselines), and reframes concept forgetting as a controlled learning process for the first time. Experiments demonstrate a ≥92% reduction in successful extraction of sensitive attributes, alongside plug-and-play compatibility and strong adversarial robustness.

📝 Abstract
Memorization in large-scale text-to-image diffusion models poses significant security and intellectual property risks, enabling adversarial attribute extraction and the unauthorized reproduction of sensitive or proprietary features. While conventional dememorization techniques, such as regularization and data filtering, limit overfitting to specific training examples, they fail to systematically prevent the internalization of prohibited concept-level features. Simply discarding all images containing a sensitive feature wastes invaluable training data, necessitating a method for selective unlearning at the concept level. To address this, we introduce a Gradient Projection Framework designed to enforce a stringent requirement of concept-level feature exclusion. Our defense operates during backpropagation by systematically identifying and excising training signals aligned with embeddings of prohibited attributes. Specifically, we project each gradient update onto the orthogonal complement of the sensitive feature's embedding space, thereby zeroing out its influence on the model's weights. Our method integrates seamlessly into standard diffusion model training pipelines and complements existing defenses. We analyze our method against an adversary aiming for feature extraction. In extensive experiments, we demonstrate that our framework drastically reduces memorization while rigorously preserving generation quality and semantic fidelity. By reframing memorization control as selective learning, our approach establishes a new paradigm for IP-safe and privacy-preserving generative AI.
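The projection step described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function name, shapes, and the use of a QR factorization to orthonormalize the concept directions are assumptions. Given gradient g and an orthonormal basis Q for the sensitive concept subspace, the update becomes g − QQᵀg, which zeroes the component of g inside that subspace.

```python
import numpy as np

def project_out_concepts(grad, concept_embeddings):
    """Project a flattened gradient onto the orthogonal complement
    of the subspace spanned by the sensitive concept embeddings.

    grad:               shape (dim,)
    concept_embeddings: shape (n_concepts, dim)
    """
    # Orthonormalize the concept directions; QR gives a stable basis Q
    # of shape (dim, n_concepts) even when the embeddings are correlated.
    Q, _ = np.linalg.qr(concept_embeddings.T)
    # Subtract the component of the gradient lying in the concept subspace.
    return grad - Q @ (Q.T @ grad)

# Toy example: one sensitive direction along the first coordinate axis.
rng = np.random.default_rng(0)
concepts = np.array([[1.0, 0.0, 0.0, 0.0]])  # (n_concepts, dim)
grad = rng.normal(size=4)
clean = project_out_concepts(grad, concepts)
# After projection the gradient carries no component along the
# sensitive direction, while all orthogonal components are unchanged.
```

In an actual training loop this operation would sit between loss backpropagation and the optimizer step, applied to the gradients of the layers whose activations align with the text-embedding space.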
Problem

Research questions and friction points this paper is trying to address.

Prevent concept-level memorization in diffusion models
Enable selective unlearning of sensitive features
Maintain generation quality while reducing security risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient projection for concept-level feature exclusion
Orthogonal complement projection to zero out sensitive influences
Selective unlearning preserving generation quality and fidelity