🤖 AI Summary
Text-to-image generative models are vulnerable to adversarial prompts that elicit harmful content. The two existing defense families, fine-tuning-based concept unlearning and training-free negative-prompt guidance, degrade when combined because their paradigms conflict in the latent space. This work identifies that intrinsic semantic conflict and proposes Implicit Negative Embedding (INE), a framework that uses concept inversion to map harmful concepts into transferable latent-space negative priors, replacing explicit negative prompts without architectural modification or model retraining. INE preserves the semantics of the original prompt while steering generation toward safe outputs. On nudity and violence benchmarks, INE achieves significantly higher defense success rates than prior methods, demonstrating effectiveness, cross-model generalizability, and deployment efficiency, with no additional inference overhead or model updates.
📝 Abstract
Recent advances in text-to-image generative models have raised concerns about their potential to produce harmful content when given malicious input text prompts. To address this issue, two main approaches have emerged: (1) fine-tuning the model to unlearn harmful concepts and (2) training-free guidance methods that leverage negative prompts. However, we observe that combining these two orthogonal approaches often yields marginal gains or even degraded defense performance. This observation indicates a critical incompatibility between the two paradigms that hinders their combined effectiveness. In this work, we address this issue with a conceptually simple yet experimentally robust method: replacing the negative prompts used in training-free methods with implicit negative embeddings obtained through concept inversion. Our method requires no modification to either approach and can be easily integrated into existing pipelines. We experimentally validate its effectiveness on nudity and violence benchmarks, demonstrating consistent improvements in defense success rate while preserving the core semantics of input prompts.
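To make the mechanism concrete, here is a minimal, hedged sketch (not the paper's implementation) of the guidance step the abstract refers to. In classifier-free guidance, a negative prompt replaces the unconditional branch of the denoiser; the idea described above would swap the text-encoded negative-prompt embedding for a learned implicit embedding obtained via concept inversion, while the combination rule itself stays the same. The function name `guided_noise` and the numpy stand-ins for noise predictions are illustrative assumptions.

```python
import numpy as np

def guided_noise(eps_cond: np.ndarray, eps_neg: np.ndarray,
                 scale: float = 7.5) -> np.ndarray:
    """Classifier-free guidance with a negative branch:
    eps_hat = eps(neg) + scale * (eps(cond) - eps(neg)).

    With INE (as described above), eps_neg would be the denoiser's
    prediction conditioned on the inverted implicit embedding rather
    than on an explicit negative text prompt.
    """
    return eps_neg + scale * (eps_cond - eps_neg)

# Toy sanity check: when both branches agree, guidance is a no-op.
eps = np.ones((4, 4))
assert np.allclose(guided_noise(eps, eps), eps)
```

Because only the negative-branch conditioning changes, this drops into existing pipelines without touching the model or the unlearning fine-tune, which is the compatibility property the abstract emphasizes.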