Prompt-Based Safety Guidance Is Ineffective for Unlearned Text-to-Image Diffusion Models

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text-to-image generative models are vulnerable to adversarial prompts that elicit harmful content. Existing defenses, namely fine-tuning-based unlearning and training-free negative prompting, degrade when combined because the two paradigms are fundamentally incompatible. This work identifies an intrinsic semantic conflict between these approaches in the latent space and proposes Implicit Negative Embedding (INE): a framework that uses concept inversion to map harmful concepts into transferable, latent-space negative priors, replacing explicit negative prompts without architectural modification or model retraining. INE preserves the semantic fidelity of the original prompt while still guiding generation toward safe outputs. Evaluated on nudity and violence benchmarks, INE achieves significantly higher defense success rates than prior methods. Results demonstrate its effectiveness, cross-model generalizability, and deployment efficiency, requiring no additional inference overhead or model updates.

📝 Abstract
Recent advances in text-to-image generative models have raised concerns about their potential to produce harmful content when provided with malicious input text prompts. To address this issue, two main approaches have emerged: (1) fine-tuning the model to unlearn harmful concepts and (2) training-free guidance methods that leverage negative prompts. However, we observe that combining these two orthogonal approaches often leads to marginal or even degraded defense performance. This observation indicates a critical incompatibility between the two paradigms, which hinders their combined effectiveness. In this work, we address this issue by proposing a conceptually simple yet experimentally robust method: replacing the negative prompts used in training-free methods with implicit negative embeddings obtained through concept inversion. Our method requires no modification to either approach and can be easily integrated into existing pipelines. We experimentally validate its effectiveness on nudity and violence benchmarks, demonstrating consistent improvements in defense success rate while preserving the core semantics of input prompts.
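The training-free guidance the abstract refers to is classifier-free guidance with a negative prompt: the unconditional branch of the guidance update is replaced by the noise prediction for the harmful concept, so each denoising step is pushed away from it. A minimal sketch of both update rules, using a toy stand-in for the diffusion noise predictor (the function `eps_model`, the dimension `D`, and all tensors are illustrative placeholders, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy latent/embedding dimension

def eps_model(latent, cond):
    """Toy stand-in for a diffusion noise predictor eps_theta(x_t, c).
    A real model would be a U-Net conditioned on text embeddings."""
    return 0.1 * latent + 0.5 * cond

latent = rng.standard_normal(D)
cond = rng.standard_normal(D)   # embedding of the user's prompt
neg = rng.standard_normal(D)    # embedding of a negative prompt, e.g. "nudity"
scale = 7.5                     # guidance scale

# Standard classifier-free guidance: steer toward the prompt,
# away from the unconditional prediction.
eps_uncond = eps_model(latent, np.zeros(D))
eps_cond = eps_model(latent, cond)
eps_cfg = eps_uncond + scale * (eps_cond - eps_uncond)

# Negative-prompt guidance: the unconditional branch is replaced by the
# harmful concept's prediction, so the step also moves away from it.
eps_neg = eps_model(latent, neg)
eps_safe = eps_neg + scale * (eps_cond - eps_neg)
```

The paper's observation is that when `eps_model` has already been fine-tuned to unlearn the concept, the text-encoded `neg` branch no longer points at the concept's true latent direction, which is why the two defenses interfere.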
Problem

Research questions and friction points this paper is trying to address.

Text-to-image models generate harmful content from malicious prompts
Combining fine-tuning and prompt-based safety methods reduces defense effectiveness
Incompatibility between unlearning and guidance paradigms limits safety performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses implicit negative embeddings from concept inversion
Replaces negative prompts in training-free guidance methods
Integrates seamlessly without modifying existing model pipelines
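Concept inversion, as the bullets describe it, recovers the concept's direction in embedding space by gradient descent on a single embedding vector while the model stays frozen (a textual-inversion-style objective). A toy sketch under strong simplifying assumptions: the frozen "denoiser" is linear, and the targets come from a hidden ground-truth vector `v_true` standing in for real images of the harmful concept:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8  # toy embedding dimension

W = rng.standard_normal((D, D)) * 0.3  # frozen toy "denoiser" weights

def eps_model(latent, cond):
    """Frozen toy noise predictor; only the embedding below is optimized."""
    return latent @ W + cond

# Stand-in for noise-prediction targets derived from images that contain
# the harmful concept; v_true is a hypothetical ground-truth direction.
v_true = rng.standard_normal(D)
latents = rng.standard_normal((16, D))
targets = eps_model(latents, v_true)

# Concept inversion: gradient descent on one embedding vector v,
# minimizing the squared noise-prediction error with the model frozen.
v = np.zeros(D)
lr = 0.1
for _ in range(200):
    pred = eps_model(latents, v)
    grad = 2 * (pred - targets).mean(axis=0)  # gradient of the MSE w.r.t. v
    v -= lr * grad

# v is now an implicit negative embedding: it is passed wherever a
# text-encoded negative prompt would go in the guidance update.
```

Because the embedding is fitted against the (possibly unlearned) model itself rather than taken from the text encoder, it tracks where the concept actually lives in that model's latent space, which is what makes it compatible with unlearned models.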
👥 Authors
Jiwoo Shin (KAIST)
Byeonghu Na (KAIST)
Mina Kang (KAIST)
Wonhyeok Choi (KAIST)
Il-chul Moon (KAIST, summary.ai)

Topics: Generative Model · Diffusion Model