🤖 AI Summary
To address the limited interpretability of large text-to-image diffusion models (e.g., Flux 1), this paper introduces a visual automated interpretation framework that integrates sparse autoencoders (SAEs) with Inference-Time Decomposition of Activations (ITDA), presented as the first large-scale deployment of both techniques on the residual stream of such models. Methodologically, the authors design an end-to-end pipeline enabling precise embedding-space reconstruction and semantic parsing, while supporting SAE-based generative control, including feature injection for guided image synthesis. Key contributions are: (1) SAEs accurately reconstruct residual-stream embeddings and are significantly more interpretable than MLP neurons; (2) ITDA attains explanation quality comparable to SAEs, validating its efficacy as a lightweight alternative; and (3) the work establishes a vision-driven automated interpretation paradigm that unifies controllable generation with mechanistic understanding.
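The SAE-based control described above can be illustrated with a minimal sketch. All names, dimensions, and the ReLU encoder architecture here are assumptions for illustration, not the paper's actual implementation; "steering" simply adds a scaled decoder direction for one feature to a residual-stream activation.

```python
import numpy as np

class SparseAutoencoder:
    """Minimal SAE sketch (illustrative; not the paper's architecture)."""

    def __init__(self, d_model, d_dict, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(size=(d_model, d_dict)) / np.sqrt(d_model)
        self.b_enc = np.zeros(d_dict)
        self.W_dec = rng.normal(size=(d_dict, d_model)) / np.sqrt(d_dict)
        self.b_dec = np.zeros(d_model)

    def encode(self, x):
        # ReLU encoder yields sparse, non-negative feature activations.
        return np.maximum(0.0, x @ self.W_enc + self.b_enc)

    def decode(self, f):
        return f @ self.W_dec + self.b_dec

    def reconstruct(self, x):
        return self.decode(self.encode(x))

def steer(sae, x, feature_idx, strength):
    # Activation addition: push the residual-stream activation along the
    # decoder direction of a chosen feature to guide generation.
    return x + strength * sae.W_dec[feature_idx]
```

In a diffusion model this addition would be applied to the residual stream at a chosen layer during sampling; here it is shown as a pure vector operation.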
📝 Abstract
Sparse autoencoders are a promising new approach for decomposing language model activations for interpretation and control. They have been applied successfully to vision transformer image encoders and to small-scale diffusion models. Inference-Time Decomposition of Activations (ITDA) is a recently proposed variant of dictionary learning that takes the dictionary to be a set of data points from the activation distribution and reconstructs activations with gradient pursuit. We apply Sparse Autoencoders (SAEs) and ITDA to a large text-to-image diffusion model, Flux 1, and evaluate the interpretability of both by introducing a visual automated interpretation pipeline. We find that SAEs accurately reconstruct residual stream embeddings and outperform MLP neurons on interpretability. We are able to use SAE features to steer image generation through activation addition. We find that ITDA has interpretability comparable to SAEs.
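The ITDA idea sketched in the abstract (dictionary = stored activation data points, reconstruction by greedy pursuit) can be illustrated as follows. This sketch uses plain matching pursuit rather than the gradient pursuit named in the abstract, and all names and sizes are assumptions for illustration.

```python
import numpy as np

def itda_reconstruct(x, dictionary, k=5):
    """Greedily code `x` against a dictionary of stored activations.

    `dictionary` is an (n_atoms, d_model) array of data points drawn from
    the activation distribution, as in ITDA. This uses simple matching
    pursuit for clarity, not the gradient pursuit from the paper.
    """
    # Normalize atoms so inner products are projection coefficients.
    atoms = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    residual = x.astype(float).copy()
    coeffs = np.zeros(len(atoms))
    for _ in range(k):
        # Pick the atom most correlated with the current residual...
        scores = atoms @ residual
        i = np.argmax(np.abs(scores))
        # ...record its coefficient and subtract its contribution.
        coeffs[i] += scores[i]
        residual -= scores[i] * atoms[i]
    return coeffs @ atoms, coeffs
```

Because the dictionary consists of raw activation data points rather than learned directions, each selected atom can be interpreted by inspecting the input that produced it.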