SAEdit: Token-level control for continuous image editing via Sparse AutoEncoder

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current text-to-image diffusion models lack disentanglement and continuous controllability: textual prompts cannot independently or smoothly modulate individual image attributes. To address this, we propose a token-level text embedding editing method that requires no architectural modification to the diffusion model. Our core innovation is introducing a sparse autoencoder (SAE) to learn disentangled representations of CLIP text embeddings, automatically discovering semantically isolated, sparsely activated latent directions. By linearly interpolating along these directions, we enable continuous, precise, and disentangled editing of diverse attributes—including color, style, and material. Experiments demonstrate strong cross-domain generalization, intuitive user control, and significant improvements over baseline methods based on attention intervention or prompt engineering.
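The mechanism described above can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the dimensions (768-d token embeddings, 4096 sparse units), the random weights, and the function names are all hypothetical, chosen only to show how a ReLU sparse autoencoder over per-token text embeddings exposes individual latent units that can be shifted to modulate an attribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a linear SAE over per-token CLIP text embeddings
# (dim 768) with an overcomplete latent space (4096 units). ReLU keeps
# latent activations sparse and non-negative.
D_EMBED, D_LATENT = 768, 4096
W_enc = rng.standard_normal((D_EMBED, D_LATENT)) * 0.02
b_enc = np.zeros(D_LATENT)
W_dec = rng.standard_normal((D_LATENT, D_EMBED)) * 0.02

def encode(x):
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(z):
    return z @ W_dec

def edit_token(token_emb, direction_idx, strength):
    """Shift one sparse latent unit to modulate the attribute it encodes.

    strength = 0 leaves the (reconstructed) token unchanged; larger
    values amplify the attribute, giving continuous control.
    """
    z = encode(token_emb)
    z[..., direction_idx] += strength
    return decode(z)

tok = rng.standard_normal((1, D_EMBED))   # one token's embedding (toy data)
edited = edit_token(tok, direction_idx=123, strength=2.0)
print(edited.shape)   # (1, 768)
```

In the paper's setting the edited embedding would replace the original token embedding in the text conditioning, leaving the diffusion model itself untouched.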

📝 Abstract
Large-scale text-to-image diffusion models have become the backbone of modern image editing, yet text prompts alone do not offer adequate control over the editing process. Two properties are especially desirable: disentanglement, where changing one attribute does not unintentionally alter others, and continuous control, where the strength of an edit can be smoothly adjusted. We introduce a method for disentangled and continuous editing through token-level manipulation of text embeddings. The edits are applied by manipulating the embeddings along carefully chosen directions, which control the strength of the target attribute. To identify such directions, we employ a Sparse Autoencoder (SAE), whose sparse latent space exposes semantically isolated dimensions. Our method operates directly on text embeddings without modifying the diffusion process, making it model agnostic and broadly applicable to various image synthesis backbones. Experiments show that it enables intuitive and efficient manipulations with continuous control across diverse attributes and domains.
Problem

Research questions and friction points this paper is trying to address.

Achieving disentangled continuous image editing via token-level control
Overcoming text prompts' limitations in precise attribute manipulation
Identifying semantic editing directions using sparse autoencoder latent space
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-level text embedding manipulation for editing
Sparse Autoencoder identifies semantic control directions
Model-agnostic approach without modifying diffusion process
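The continuous-control claim in the points above has a simple geometric reading, which the following sketch illustrates (toy weights and names, not the paper's code): with a linear SAE decoder, adding a scalar strength to one sparse latent unit shifts the decoded embedding by that strength times the unit's decoder row, so the edit magnitude grows smoothly and linearly with the chosen strength.

```python
import numpy as np

rng = np.random.default_rng(0)
W_dec = rng.standard_normal((4096, 768)) * 0.02  # toy decoder weights

def apply_edit(emb, direction_idx, strength):
    # Decoding (z + strength * e_i) equals decoding z plus
    # strength * W_dec[i], so the edit is a linear ray in embedding space.
    return emb + strength * W_dec[direction_idx]

emb = rng.standard_normal(768)
deltas = [np.linalg.norm(apply_edit(emb, 7, s) - emb) for s in (0.5, 1.0, 2.0)]
# Doubling the strength doubles the edit magnitude
print(np.allclose(deltas[2], 2 * deltas[1]))   # True
```

Because the whole operation happens on the text embedding before it enters the diffusion model, any backbone that accepts precomputed text embeddings can consume the result unchanged, which is what makes the approach model agnostic.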