AtMan: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation

📅 2023-01-19
🏛️ Neural Information Processing Systems
📈 Citations: 18
Influential: 1
🤖 AI Summary
Existing gradient-based attribution methods for generative transformer models, particularly multimodal ones, incur high GPU memory overhead because they rely on backpropagation, which hinders practical deployment. This paper introduces AtMan, a backpropagation-free perturbation paradigm: it manipulates the transformer's attention mechanism at the token level, using a parallelizable search based on cosine-similarity neighborhoods in the embedding space, to produce relevance maps of the input with respect to the output prediction. The method is modality-agnostic, requires no gradient computation, and adds almost no GPU memory. Evaluated on text and image-text benchmarks, it outperforms state-of-the-art gradient-based approaches on several attribution metrics while remaining computationally efficient, making low-overhead interpretability analysis practical for large-scale generative transformers at inference time.
📝 Abstract
Generative transformer models have become increasingly complex, with large numbers of parameters and the ability to process multiple input modalities. Current methods for explaining their predictions are resource-intensive. Most crucially, they require prohibitively large amounts of extra memory, since they rely on backpropagation which allocates almost twice as much GPU memory as the forward pass. This makes it difficult, if not impossible, to use them in production. We present AtMan that provides explanations of generative transformer models at almost no extra cost. Specifically, AtMan is a modality-agnostic perturbation method that manipulates the attention mechanisms of transformers to produce relevance maps for the input with respect to the output prediction. Instead of using backpropagation, AtMan applies a parallelizable token-based search method based on cosine similarity neighborhood in the embedding space. Our exhaustive experiments on text and image-text benchmarks demonstrate that AtMan outperforms current state-of-the-art gradient-based methods on several metrics while being computationally efficient. As such, AtMan is suitable for use in large model inference deployments.
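The core idea in the abstract (perturbing the attention mechanism instead of backpropagating, then reading relevance off the change in the output) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the suppression factor, the single-head setup, and the choice to scale pre-softmax scores are simplifying assumptions made here for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, suppress_idx=None, factor=0.9):
    """Scaled dot-product attention, optionally suppressing one input token.

    Suppression (AtMan-style, simplified): scale down the pre-softmax
    attention scores of the suppressed token's column so the model
    largely ignores that token in the forward pass.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if suppress_idx is not None:
        scores[:, suppress_idx] *= (1.0 - factor)
    return softmax(scores, axis=-1) @ v

def relevance(q, k, v, target_fn):
    """Relevance of each input token: how much the target score changes
    when that token is suppressed. No gradients are computed, so the
    memory cost is that of a plain forward pass."""
    base = target_fn(attention(q, k, v))
    return np.array([
        target_fn(attention(q, k, v, suppress_idx=i)) - base
        for i in range(k.shape[0])
    ])
```

Each perturbed forward pass is independent, which is what makes the token-level search parallelizable; a full implementation would apply the suppression across all heads and layers and, for images, extend it to cosine-similar neighboring token embeddings rather than a single token.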
Problem

Research questions and friction points this paper is trying to address.

Explainability
Generative Large Models
Multi-task Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

AtMan
Memory-efficient Attention Adjustment
Generative Large Models Interpretability
Mayukh Deb
PhD Student, Georgia Tech
neural networks and math
Bjorn Deiseroth
Aleph Alpha, Technical University Darmstadt, Hessian Center for Artificial Intelligence (hessian.AI)
Samuel Weinbach
Aleph Alpha
Manuel Brack
Applied Research Scientist @ Adobe | Adjunct Researcher @ hessian.AI
Machine Learning
P. Schramowski
Technical University Darmstadt, Hessian Center for Artificial Intelligence (hessian.AI), German Center for Artificial Intelligence (DFKI)
K. Kersting
Technical University Darmstadt