🤖 AI Summary
Vision Transformers (ViTs) develop "massive" tokens with exceptionally high activation norms that act as attention sinks, along with "artifact" tokens that emerge as a byproduct of inference; these tokens mutually suppress one another through self-attention, playing a key role in regulating information flow within the network. Method: Building on this observation, the authors propose Fast Nyström Attention (FNA), a training-free method that approximates self-attention in linear time and space by exploiting the structured patterns formed by massive and artifact tokens, together with a lightweight masking strategy that mitigates noise from these tokens at virtually no cost. Contribution/Results: Evaluated on popular pretrained vision backbones across image classification, retrieval, segmentation, and visual question answering, FNA achieves competitive accuracy, with modest gains from masking, while reducing computational overhead.
📝 Abstract
Vision transformers have emerged as a powerful tool across a wide range of applications, yet their inner workings remain only partially understood. In this work, we examine the phenomenon of massive tokens (tokens with exceptionally high activation norms that act as attention sinks) and artifact tokens that emerge as a byproduct during inference. Our analysis reveals that these tokens mutually suppress one another through the attention mechanism, playing a critical role in regulating information flow within the network. Leveraging these insights, we introduce Fast Nyström Attention (FNA), a training-free method that approximates self-attention in linear time and space by exploiting the structured patterns formed by massive and artifact tokens. Additionally, we propose a masking strategy to mitigate noise from these tokens, yielding modest performance gains at virtually no cost. We evaluate our approach on popular pretrained vision backbones and demonstrate competitive performance on retrieval, classification, segmentation, and visual question answering (VQA), all while reducing computational overhead.
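To make the linear-time claim concrete, the Nyström idea behind methods like FNA can be sketched as follows. This is a generic Nyströmformer-style approximation, not the authors' actual implementation: the full n×n softmax attention matrix is never formed; instead, m landmark tokens (here, segment means, an illustrative choice) yield three small kernels whose product approximates attention in O(n·m) time and space. The function name `nystrom_attention` and all details below are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, m):
    """Approximate softmax attention with m landmarks in O(n*m) time.

    Q, K, V: (n, d) arrays; m: number of landmark tokens (m << n).
    """
    n, d = Q.shape
    # Landmarks as segment means of queries/keys (one common choice).
    segments = np.array_split(np.arange(n), m)
    Q_l = np.stack([Q[s].mean(axis=0) for s in segments])  # (m, d)
    K_l = np.stack([K[s].mean(axis=0) for s in segments])  # (m, d)
    scale = 1.0 / np.sqrt(d)
    F = softmax(Q @ K_l.T * scale)    # (n, m): queries vs. landmark keys
    A = softmax(Q_l @ K_l.T * scale)  # (m, m): landmark-landmark kernel
    B = softmax(Q_l @ K.T * scale)    # (m, n): landmark queries vs. keys
    # Nyström reconstruction: F @ A^+ @ (B @ V), never materializing n x n.
    return F @ np.linalg.pinv(A) @ (B @ V)  # (n, d)
```

A useful sanity check: with m = n (every token its own landmark), the reconstruction F · A⁺ · A · V collapses to exact softmax attention, so the approximation error comes entirely from using fewer landmarks than tokens.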