SPOT: Sparsification with Attention Dynamics via Token Relevance in Vision Transformers

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision Transformers (ViTs) suffer from quadratic computational complexity in token count due to self-attention, leading to inefficient inference. To address this, we propose a context-aware token sparsification method that dynamically models cross-layer attention and analyzes token embedding interactions to design a lightweight, plug-and-play predictor. This predictor enables input-adaptive early identification and removal of redundant tokens without modifying the backbone architecture. The approach is agnostic to ViT variants, supports seamless integration across multiple architectures, and offers resource-elastic configurability. Evaluated on ImageNet and other benchmarks, our method reduces FLOPs by up to 40% while preserving or even improving accuracy—outperforming existing token pruning techniques. Extensive experiments validate its effectiveness, interpretability, and generalizability across diverse ViT models and downstream tasks.

📝 Abstract
While Vision Transformers (ViT) have demonstrated remarkable performance across diverse tasks, their computational demands are substantial, scaling quadratically with the number of processed tokens. Compact attention representations, reflecting token interaction distributions, can guide early detection and reduction of less salient tokens prior to attention computation. Motivated by this, we present SParsification with attentiOn dynamics via Token relevance (SPOT), a framework for early detection of redundant tokens within ViTs that leverages token embeddings, interactions, and attention dynamics across layers to infer token importance, resulting in a more context-aware and interpretable relevance detection process. SPOT informs token sparsification and facilitates the elimination of such tokens, improving computational efficiency without sacrificing performance. SPOT employs computationally lightweight predictors that can be plugged into various ViT architectures and learn to derive effective input-specific token prioritization across layers. Its versatile design supports a range of performance levels adaptable to varying resource constraints. Empirical evaluations demonstrate significant efficiency gains of up to 40% compared to standard ViTs, while maintaining or even improving accuracy. Code and models are available at https://github.com/odedsc/SPOT.
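To make the mechanics the abstract describes concrete (score each token's relevance from attention, then drop low-relevance tokens before further computation), here is a minimal NumPy sketch. The shapes, the keep ratio, and the attention-received heuristic are all illustrative assumptions; SPOT instead learns relevance with a lightweight trained predictor over token embeddings, interactions, and cross-layer attention dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 1 CLS token + 196 patch tokens, 192-dim embeddings
# (illustrative sizes, not SPOT's actual configuration).
N, D = 197, 192
tokens = rng.standard_normal((N, D)).astype(np.float32)

def attention_map(x):
    """Single-head self-attention weights: softmax over scaled dot products."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=1, keepdims=True)

def token_relevance(attn):
    """Toy relevance score: attention each token receives, averaged over
    all queries. SPOT learns such scores; this heuristic only illustrates
    the pruning mechanics."""
    return attn.mean(axis=0)  # shape (N,)

def prune(x, keep_ratio=0.7):
    """Keep the CLS token plus the top-k most relevant patch tokens."""
    rel = token_relevance(attention_map(x))
    n_patches = x.shape[0] - 1
    k = int(np.ceil(keep_ratio * n_patches))
    # Rank patch tokens (indices 1..N-1) by relevance, keep the top k
    order = np.argsort(rel[1:])[::-1][:k] + 1
    keep = np.concatenate(([0], np.sort(order)))  # CLS is always kept
    return x[keep]

pruned = prune(tokens, keep_ratio=0.7)
print(pruned.shape)  # (139, 192): 1 CLS + ceil(0.7 * 196) patch tokens
```

Because the pruned sequence is just a shorter token matrix, all downstream blocks run unchanged on fewer tokens, which is why this style of sparsification needs no backbone modification.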
Problem

Research questions and friction points this paper is trying to address.

Reducing the computational demands of Vision Transformers through token sparsification
Detecting redundant tokens early, before attention computation, using attention dynamics
Preserving accuracy while achieving substantial gains in computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Early token sparsification driven by cross-layer attention dynamics and token interactions
Lightweight, plug-and-play predictors that learn input-specific token prioritization
Resource-elastic configurability across ViT variants without modifying the backbone
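The up-to-40% efficiency figure is consistent with simple scaling arithmetic: self-attention cost grows quadratically with token count, so tokens dropped early save compute in every remaining layer. A back-of-envelope sketch under assumed (not paper-reported) model and pruning-schedule parameters:

```python
# Rough FLOPs for a ViT-B/16-like model (D=768, 12 blocks, 197 tokens),
# using the standard per-block decomposition: Q/K/V + output projections
# ~ 4*N*D^2, attention matmuls ~ 2*N^2*D, 4x-expansion MLP ~ 8*N*D^2.
# The pruning schedule below is a hypothetical illustration.
def block_flops(n_tokens, dim):
    projections = 4 * n_tokens * dim ** 2   # Q, K, V and output projections
    attention = 2 * n_tokens ** 2 * dim     # QK^T and attention-weighted V
    mlp = 8 * n_tokens * dim ** 2           # two D <-> 4D linear layers
    return projections + attention + mlp

D, N, LAYERS = 768, 197, 12
full = LAYERS * block_flops(N, D)

# Hypothetical schedule: keep all tokens for the first 3 blocks, then
# retain ~50% of the tokens for the remaining 9 blocks.
kept = int(0.5 * N)
sparse = 3 * block_flops(N, D) + (LAYERS - 3) * block_flops(kept, D)

reduction = 1 - sparse / full
print(f"FLOPs reduction: {reduction:.1%}")  # FLOPs reduction: 38.5%
```

The earlier and more aggressively tokens are removed, the closer the savings approach the quadratic attention factor, which is why input-adaptive early detection matters.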