ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features

📅 2025-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal diffusion transformers (DiTs) offer limited representational interpretability. Method: We propose ConceptAttention, a training-free method that extracts contextualized concept embeddings directly from DiT attention outputs via linear projection, enabling high-fidelity, text-guided saliency mapping. Contribution/Results: We discover that linear projections in the output space of DiT attention layers yield sharper and more semantically consistent localization maps than conventional cross-attention mechanisms. We further validate the strong zero-shot transferability of DiT representations (e.g., in Flux) to dense prediction tasks, enabling direct application to image segmentation without fine-tuning. Our approach achieves state-of-the-art zero-shot segmentation performance on ImageNet-Segmentation and a single-class subset of PascalVOC, significantly outperforming CLIP and 11 existing interpretability methods. This work establishes a new paradigm for intrinsic DiT interpretability and multi-task generalization.
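The core mechanism can be illustrated with a minimal sketch. In the actual method, concept tokens are processed through the DiT attention layers alongside image tokens; the sketch below only shows the final step, where a saliency map is obtained as a linear projection (dot product) between image-patch and concept embeddings in the attention output space, followed by a softmax over concepts. All names, shapes, and the `temperature` parameter here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def concept_saliency(img_out, concept_out, temperature=1.0):
    """Toy sketch of ConceptAttention-style saliency (hypothetical shapes).

    img_out:     (N, d) attention-layer output embeddings for N image patches
    concept_out: (K, d) attention-layer output embeddings for K concept tokens
    Returns an (N, K) map: softmax over concepts of the dot product between
    image and concept embeddings in the attention output space.
    """
    logits = img_out @ concept_out.T / temperature   # (N, K) linear projection
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)        # softmax over concepts
    return probs

# Usage: a 16x16 patch grid, 64-dim embeddings, 3 text concepts
rng = np.random.default_rng(0)
img_out = rng.standard_normal((256, 64))
concept_out = rng.standard_normal((3, 64))
saliency = concept_saliency(img_out, concept_out)    # (256, 3)
maps = saliency.T.reshape(3, 16, 16)                 # one spatial map per concept
```

Thresholding or taking the argmax over the concept axis of such maps is what turns them into zero-shot segmentation masks in the paper's evaluation setting.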

📝 Abstract
Do the rich representations of multi-modal diffusion transformers (DiTs) exhibit unique properties that enhance their interpretability? We introduce ConceptAttention, a novel method that leverages the expressive power of DiT attention layers to generate high-quality saliency maps that precisely locate textual concepts within images. Without requiring additional training, ConceptAttention repurposes the parameters of DiT attention layers to produce highly contextualized concept embeddings, contributing the major discovery that performing linear projections in the output space of DiT attention layers yields significantly sharper saliency maps compared to commonly used cross-attention mechanisms. Remarkably, ConceptAttention even achieves state-of-the-art performance on zero-shot image segmentation benchmarks, outperforming 11 other zero-shot interpretability methods on the ImageNet-Segmentation dataset and on a single-class subset of PascalVOC. Our work contributes the first evidence that the representations of multi-modal DiT models like Flux are highly transferable to vision tasks like segmentation, even outperforming multi-modal foundation models like CLIP.
Problem

Research questions and friction points this paper is trying to address.

Enhance interpretability of diffusion transformers
Generate precise saliency maps for images
Improve zero-shot image segmentation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Repurposes DiT attention layer parameters without additional training
Linear projections in the attention output space yield sharper saliency maps than cross-attention
Achieves state-of-the-art zero-shot segmentation
🔎 Similar Papers
No similar papers found.