🤖 AI Summary
To address the persistent challenge of erasing harmful concepts (e.g., NSFW content) from text-to-image diffusion models, where such concepts resist complete removal and remain vulnerable to jailbreaking attacks, this paper proposes TraSCE, a training-free, weight-invariant, trajectory-level guidance method. The approach comprises two key components: (1) bypass-resistant refined negative prompts, and (2) a localized loss-based guidance that dynamically steers the denoising trajectory in latent space to suppress target concepts. This "trajectory steering" mechanism achieves concept erasure with zero training, zero data, and zero modification of model weights. Evaluated on red-teaming benchmarks and concept-removal tasks, including artistic style and object erasure, the method attains state-of-the-art performance, is robust against diverse jailbreaking attacks, and enables rapid deployment for erasing new concepts.
📝 Abstract
Recent advancements in text-to-image diffusion models have brought them into the public spotlight, making them widely accessible and embraced by everyday users. However, these models have been shown to generate harmful content such as not-safe-for-work (NSFW) images. While approaches have been proposed to erase such abstract concepts from the models, jailbreaking techniques have succeeded in bypassing these safety measures. In this paper, we propose TraSCE, an approach that guides the diffusion trajectory away from generating harmful content. Our approach is based on negative prompting, but as we show, conventional negative prompting is not a complete solution and can easily be bypassed in certain corner cases. To address this issue, we first propose a modification of conventional negative prompting. We then introduce a localized loss-based guidance that strengthens the modified negative prompting by steering the diffusion trajectory. We demonstrate that our method achieves state-of-the-art results on various benchmarks for removing harmful content, including ones proposed by red teams, as well as for erasing artistic styles and objects. Our approach requires no training, no weight modification, and no training data (either images or prompts), making it easy for model owners to erase new concepts.
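To make the guidance idea concrete, the sketch below shows a toy denoising update that combines negative-prompt guidance with a loss-like steering term. Everything here is an assumption for illustration: `eps_model` is a hypothetical stand-in for the diffusion model's noise predictor, the alignment "loss" is a crude surrogate, and the paper's actual localized loss, refined negative prompts, and scheduler terms are not reproduced.

```python
import numpy as np

def eps_model(x, t, cond):
    # Hypothetical stand-in for a diffusion model's noise-prediction
    # network eps_theta(x_t, t, c); a real system would call the UNet.
    # Deterministic pseudo-noise so the sketch is reproducible.
    seed = (t * 131 + sum(ord(ch) for ch in cond)) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(x.shape).astype(x.dtype)

def guided_step(x, t, prompt, neg_prompt, scale=7.5, steer=0.1, step=0.1):
    """One simplified denoising update: negative-prompt guidance plus a
    loss-based steering term (illustrative, not the paper's exact method)."""
    eps_cond = eps_model(x, t, prompt)
    eps_neg = eps_model(x, t, neg_prompt)
    # Classifier-free-guidance-style combination that pushes the
    # prediction away from the negative (to-be-erased) concept.
    eps = eps_neg + scale * (eps_cond - eps_neg)
    # Loss surrogate: large when the conditional prediction aligns with
    # the negative-concept prediction, i.e., the unwanted concept is active.
    align = np.sum(eps_cond * eps_neg) / eps_cond.size
    # Steer the trajectory further from the concept when alignment is high.
    eps = eps + steer * align * eps_neg
    return x - step * eps  # simplified latent update (no scheduler terms)
```

Because the steering term only activates when the two predictions agree, an innocuous prompt (low alignment) is left nearly untouched, which mirrors the intent of a localized, trajectory-level intervention.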