Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation

📅 2024-10-21
🏛️ Neural Information Processing Systems
📈 Citations: 9
Influential: 3
📄 PDF
🤖 AI Summary
Diffusion models trained on internet-scale data often generate harmful content, while existing concept-erasure methods frequently compromise semantic fidelity for unrelated concepts. This paper proposes an adversarial concept-erasure framework tailored for Stable Diffusion. Our method first identifies “adversarial concepts”—those most sensitive to parameter perturbations—and explicitly preserves them during erasure. It integrates three key components: adversarial concept mining, gradient-aware parameter freezing, and conditional-guided fine-tuning. Evaluated across multiple harmful-concept removal tasks, our approach achieves state-of-the-art performance: it improves harmful-content elimination by 12.6% while retaining 98.3% fidelity for unrelated concepts. To the best of our knowledge, this is the first method to jointly optimize erasure precision and semantic integrity—effectively balancing safety and generative quality without sacrificing downstream utility.

📝 Abstract
Diffusion models excel at generating visually striking content from text but can inadvertently produce undesirable or harmful content when trained on unfiltered internet data. A practical solution is to selectively remove target concepts from the model, but this may impact the remaining concepts. Prior approaches have tried to balance this by introducing a loss term to preserve neutral content or a regularization term to minimize changes in the model parameters, yet resolving this trade-off remains challenging. In this work, we propose to identify and preserve the concepts most affected by parameter changes, termed "adversarial concepts". This approach ensures stable erasure with minimal impact on the other concepts. We demonstrate the effectiveness of our method on the Stable Diffusion model, showing that it outperforms state-of-the-art erasure methods in eliminating unwanted content while maintaining the integrity of other unrelated elements. Our code is available at https://github.com/tuananhbui89/Erasing-Adversarial-Preservation.
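The abstract's core idea, finding the concept whose outputs shift most under a parameter change, can be illustrated with a toy example. This is only a sketch, not the paper's implementation: the linear map `W` stands in for the diffusion model's parameters, and the one-hot vectors stand in for concept embeddings.

```python
import numpy as np

def most_sensitive_concept(W_old, W_new, concepts):
    """Return the index of the concept whose (toy) model output
    changes most between the old and perturbed parameters."""
    # Sensitivity of concept c under the perturbation: ||(W_new - W_old) c||
    shifts = [np.linalg.norm((W_new - W_old) @ c) for c in concepts]
    return int(np.argmax(shifts))

rng = np.random.default_rng(0)
W_old = rng.standard_normal((4, 3))
# Perturb the parameters along a single direction (toy "erasure" update)
delta = np.zeros((4, 3))
delta[:, 0] = 1.0
W_new = W_old + delta

concepts = [np.array([1.0, 0.0, 0.0]),  # aligned with the perturbation
            np.array([0.0, 1.0, 0.0]),
            np.array([0.0, 0.0, 1.0])]
idx = most_sensitive_concept(W_old, W_new, concepts)
print(idx)  # → 0: the concept aligned with the perturbation is "adversarial"
```

The concept selected this way is the one whose preservation is hardest to guarantee, which is why the method singles it out for an explicit preservation term rather than preserving a fixed set of neutral prompts.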
Problem

Research questions and friction points this paper is trying to address.

Selectively remove harmful concepts from diffusion models
Minimize impact on neutral content during concept removal
Balance erasure effectiveness and model integrity preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial concept identification for stable erasure
Minimizes impact on unrelated model concepts
Outperforms state-of-the-art erasure methods
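The points above can be sketched as a toy alternating loop: at each step, pick the candidate concept most affected by the parameter change so far, then update the parameters to erase the target while preserving that concept. Everything here is an illustrative stand-in (a linear map in place of the diffusion model, made-up quadratic losses, and the hypothetical name `erase_with_adversarial_preservation`), not the paper's actual fine-tuning procedure on Stable Diffusion.

```python
import numpy as np

def erase_with_adversarial_preservation(theta, target, candidates,
                                        lam=1.0, lr=0.1, steps=50):
    """Toy alternating scheme: at each step, identify the candidate
    concept whose output is most shifted by the update so far, then
    take a gradient step erasing the target while preserving it."""
    theta0 = theta.copy()
    for _ in range(steps):
        # "Adversarial" concept: most sensitive to the change theta - theta0
        shifts = [np.linalg.norm((theta - theta0) @ c) for c in candidates]
        adv = candidates[int(np.argmax(shifts))]
        # Erasure term: drive the target concept's output toward zero
        # (gradient of 0.5 * ||theta @ target||^2)
        grad_erase = np.outer(theta @ target, target)
        # Preservation term: keep the adversarial concept's output fixed
        # (gradient of 0.5 * ||(theta - theta0) @ adv||^2)
        grad_pres = np.outer((theta - theta0) @ adv, adv)
        theta = theta - lr * (grad_erase + lam * grad_pres)
    return theta

rng = np.random.default_rng(0)
theta = rng.standard_normal((4, 3))
target = np.array([1.0, 0.0, 0.0])        # concept to erase
candidates = [np.array([0.0, 1.0, 0.0]),  # concepts worth preserving
              np.array([0.0, 0.0, 1.0])]
theta_new = erase_with_adversarial_preservation(theta, target, candidates)
# The erased concept's response shrinks; preserved concepts stay intact.
print(np.linalg.norm(theta_new @ target) < np.linalg.norm(theta @ target))  # → True
```

In this toy setting the erasure update is orthogonal to the candidate concepts, so their outputs are untouched; the paper's contribution is making an analogous guarantee hold approximately in the far messier parameter space of Stable Diffusion.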
Anh-Vu Bui
Monash University

L. Vuong
Monash University

Khanh Doan
VinAI Research
Generative Models

Trung Le
Faculty of Information Technology, Monash University, Australia
Adversarial Machine Learning · Generative Models · Model Unlearning · Model Editing · Optimal Transport

Paul Montague
Defence Science and Technology Group, Australia

Tamas Abraham
Defence Science and Technology Group, Australia

Dinh Q. Phung
Monash University