🤖 AI Summary
This work proposes High-Level Representation Misdirection (HiRM), a method that prevents the misuse of text-to-image diffusion models for generating harmful, privacy-sensitive, or copyrighted content by precisely erasing specific concepts without degrading the generation quality of unrelated ones. Leveraging causal tracing, HiRM identifies visual-attribute representations of target concepts in the early self-attention layers of the text encoder and redirects them toward random or semantic superclass directions. Only these critical layers are fine-tuned, yielding an efficient, decoupled erasure mechanism that operates independently of the denoiser. Evaluated on the UnlearnCanvas and NSFW benchmarks, HiRM effectively removes diverse targets, including objects, artistic styles, and nudity, while preserving high image fidelity at low training cost. Notably, the approach transfers zero-shot to advanced architectures such as Flux.
📝 Abstract
Text-to-image (T2I) diffusion models have advanced rapidly and seen widespread adoption. However, their powerful generative capabilities raise concerns about potential misuse for synthesizing harmful, private, or copyrighted content. To mitigate such risks, concept erasure techniques have emerged as a promising solution. Prior works have primarily focused on fine-tuning the denoising component (e.g., the U-Net backbone). However, recent causal tracing studies suggest that visual attribute information is localized in the early self-attention layers of the text encoder, indicating a potential alternative for concept erasure. Building on this insight, we conduct preliminary experiments and find that directly fine-tuning early layers can suppress target concepts but often degrades the generation quality of non-target concepts. To overcome this limitation, we propose High-Level Representation Misdirection (HiRM), which misdirects high-level semantic representations of target concepts in the text encoder toward designated vectors such as random directions or semantically defined directions (e.g., supercategories), while updating only the early layers that contain causal states of visual attributes. Our decoupling strategy enables precise concept removal with minimal impact on unrelated concepts, as demonstrated by strong results on the UnlearnCanvas and NSFW benchmarks across diverse targets (e.g., objects, styles, nudity). HiRM also preserves generative utility at low training cost, transfers to state-of-the-art architectures such as Flux without additional training, and shows synergistic effects with denoiser-based concept-erasure methods.
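The misdirection idea described above (pull the target concept's embedding toward an anchor direction while keeping unrelated embeddings fixed) can be sketched at a high level. The following is a minimal NumPy illustration, not the paper's implementation: the squared-distance loss form, the learning rate, the regularization weight `lam`, and all variable names are illustrative assumptions, and the stand-in vectors replace real text-encoder states.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # toy embedding dimension (real text-encoder states are much larger)

# Hypothetical frozen embeddings from the original text encoder (stand-ins).
emb_target_orig = rng.normal(size=dim)     # target concept, e.g. a style to erase
emb_anchor = rng.normal(size=dim)          # anchor: superclass or random direction
emb_nontarget_orig = rng.normal(size=dim)  # unrelated concept to preserve

def misdirection_loss(emb_target, emb_nontarget, lam=1.0):
    """Illustrative objective: redirect the target embedding toward the
    anchor while penalizing drift of non-target embeddings."""
    erase = np.sum((emb_target - emb_anchor) ** 2)
    preserve = np.sum((emb_nontarget - emb_nontarget_orig) ** 2)
    return erase + lam * preserve

# Plain gradient descent on the (stand-in) trainable embeddings.
t, n = emb_target_orig.copy(), emb_nontarget_orig.copy()
loss_before = misdirection_loss(t, n)
for _ in range(200):
    t -= 0.1 * 2 * (t - emb_anchor)          # gradient of the erase term
    n -= 0.1 * 2 * (n - emb_nontarget_orig)  # gradient of the preserve term
loss_after = misdirection_loss(t, n)

# The target embedding converges to the anchor; the non-target stays put.
print(np.allclose(t, emb_anchor, atol=1e-3))          # True
print(np.allclose(n, emb_nontarget_orig, atol=1e-6))  # True
```

In HiRM the analogous update is applied not to free vectors but to the parameters of the early self-attention layers identified by causal tracing, which is what keeps the rest of the pipeline (including the denoiser) untouched.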