CASL: Concept-Aligned Sparse Latents for Interpreting Diffusion Models

📅 2026-01-21
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing sparse autoencoders struggle to align the internal activations of diffusion models with human-interpretable semantic concepts, which limits semantic controllability. This work proposes a supervised alignment framework that first disentangles U-Net activations with a sparse autoencoder to obtain sparse latent variables, then learns lightweight linear mappings that associate each semantic concept with a small subset of latent dimensions. To validate the causal influence of these aligned latents, the authors introduce CASL-Steer, a causal probing mechanism that shifts activations along the learned concept axes. The study, presented as the first to achieve supervised alignment between latent variables and semantic concepts in diffusion models, also introduces the Editing Precision Ratio (EPR) as a new evaluation metric. Experiments show that the method significantly outperforms existing approaches in both semantic-editing accuracy and interpretability, confirming the effectiveness and specificity of the aligned latent representations.
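The encode-steer-decode loop summarized above can be sketched in a few lines. Everything below is a hypothetical stand-in: the SAE weights are random, the dimensions are made up, and `casl_steer` is one plausible reading of "shifting activations along the learned concept axis"; the paper's actual architecture and hyperparameters are not given on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d = U-Net activation width, k = SAE latent width.
d, k = 16, 64

# Stand-ins for a trained sparse autoencoder (encoder W_e/b_e, decoder W_d).
W_e = rng.standard_normal((k, d)) * 0.1
b_e = np.zeros(k)
W_d = rng.standard_normal((d, k)) * 0.1

def sae_encode(a):
    # ReLU encoder yields sparse, non-negative latents z.
    return np.maximum(W_e @ a + b_e, 0.0)

def sae_decode(z):
    return W_d @ z

def casl_steer(a, concept_dims, alpha):
    """Causal-probe sketch: push the concept-aligned SAE dimensions by
    alpha, decode back to activation space, and keep the SAE residual
    so unrelated content is preserved."""
    z = sae_encode(a)
    z_steered = z.copy()
    z_steered[concept_dims] += alpha        # shift along the concept axis
    residual = a - sae_decode(z)            # what the SAE fails to reconstruct
    return sae_decode(z_steered) + residual

a = rng.standard_normal(d)                  # one frozen U-Net activation vector
concept_dims = [3, 17, 42]                  # hypothetical dims tied to one concept
a_edit = casl_steer(a, concept_dims, alpha=2.0)
```

Because only the selected latent dimensions change, the edit in activation space is exactly `alpha` times the sum of the corresponding decoder columns, which is what makes the intervention a clean causal probe rather than an unconstrained edit.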

📝 Abstract
Internal activations of diffusion models encode rich semantic information, but interpreting such representations remains challenging. While Sparse Autoencoders (SAEs) have shown promise in disentangling latent representations, existing SAE-based methods for diffusion model understanding rely on unsupervised approaches that fail to align sparse features with human-understandable concepts. This limits their ability to provide reliable semantic control over generated images. We introduce CASL (Concept-Aligned Sparse Latents), a supervised framework that aligns sparse latent dimensions of diffusion models with semantic concepts. CASL first trains an SAE on frozen U-Net activations to obtain disentangled latent representations, and then learns a lightweight linear mapping that associates each concept with a small set of relevant latent dimensions. To validate the semantic meaning of these aligned directions, we propose CASL-Steer, a controlled latent intervention that shifts activations along the learned concept axis. Unlike editing methods, CASL-Steer is used solely as a causal probe to reveal how concept-aligned latents influence generated content. We further introduce the Editing Precision Ratio (EPR), a metric that jointly measures concept specificity and the preservation of unrelated attributes. Experiments show that our method achieves superior editing precision and interpretability compared to existing approaches. To the best of our knowledge, this is the first work to achieve supervised alignment between latent representations and semantic concepts in diffusion models.
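The abstract says EPR "jointly measures concept specificity and the preservation of unrelated attributes" but does not give a formula. One plausible form, shown purely as an illustration, is the ratio of the target concept's score change to the mean change in all other attribute scores; the function name, epsilon, and toy numbers below are assumptions, not the paper's definition.

```python
import numpy as np

def editing_precision_ratio(attrs_before, attrs_after, target_idx):
    """Hypothetical EPR sketch: change in the target concept's score
    divided by the mean absolute change in every other attribute.
    A precise edit moves the target a lot and everything else barely."""
    delta = np.abs(np.asarray(attrs_after) - np.asarray(attrs_before))
    target_change = delta[target_idx]
    others = np.delete(delta, target_idx)
    # Epsilon keeps the ratio finite when unrelated attributes are untouched.
    return target_change / (others.mean() + 1e-8)

before = [0.10, 0.50, 0.30]   # scores for [target, attr_b, attr_c]
after  = [0.90, 0.52, 0.29]   # big target shift, tiny side effects
epr = editing_precision_ratio(before, after, target_idx=0)
```

Under this reading, a higher EPR means the aligned latents are both effective (large target change) and specific (small collateral change), matching the two properties the abstract says the metric jointly captures.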
Problem

Research questions and friction points this paper is trying to address.

diffusion models
sparse autoencoders
semantic concepts
latent interpretability
concept alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept-Aligned Sparse Latents
Supervised Alignment
Sparse Autoencoders
Diffusion Models
Latent Steering