Learning Spatial Decay for Vision Transformers

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision Transformers (ViTs) lack explicit spatial inductive bias in self-attention, resulting in weak spatial structure modeling. Existing spatial decay methods rely on fixed distance metrics—ignoring image content—and thus suffer from limited generalizability. To address this, we propose a data-dependent spatial decay mechanism and introduce Context-Aware Gating (CAG), the first such gating module for 2D ViTs. CAG jointly encodes spatial priors (Manhattan distance) and learnable content representations to enable dynamic, adaptive inter-patch attention modulation. This establishes a novel paradigm unifying spatial and content-aware dynamic decay. Extensive experiments on ImageNet-1K classification and generative modeling demonstrate significant improvements over strong baselines, validating the method’s effectiveness, robustness, and cross-task generalization capability.

📝 Abstract
Vision Transformers (ViTs) have revolutionized computer vision, yet their self-attention mechanism lacks explicit spatial inductive biases, leading to suboptimal performance on spatially structured tasks. Existing approaches introduce data-independent spatial decay based on fixed distance metrics, applying uniform attention weighting regardless of image content and limiting adaptability to diverse visual scenarios. Inspired by recent advances in large language models, where content-aware gating mechanisms (e.g., GLA, HGRN2, FOX) significantly outperform static alternatives, we present the first successful adaptation of data-dependent spatial decay to 2D vision transformers. We introduce the Spatial Decay Transformer (SDT), featuring a novel Context-Aware Gating (CAG) mechanism that generates dynamic, data-dependent decay for patch interactions. Our approach learns to modulate spatial attention based on both content relevance and spatial proximity. We address the fundamental challenge of 1D-to-2D adaptation through a unified spatial-content fusion framework that integrates Manhattan distance-based spatial priors with learned content representations. Extensive experiments on ImageNet-1K classification and generation tasks demonstrate consistent improvements over strong baselines. Our work establishes data-dependent spatial decay as a new paradigm for enhancing spatial attention in vision transformers.
Problem

Research questions and friction points this paper is trying to address.

ViTs lack spatial biases for structured tasks
Fixed spatial decay limits adaptability in vision
Need dynamic spatial-content fusion for 2D attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Context-Aware Gating for dynamic decay
Manhattan distance-based spatial priors
Unified spatial-content fusion framework
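To make the fusion idea concrete, here is a minimal NumPy sketch of what a content-aware spatial decay could look like: a Manhattan-distance prior over a 2D patch grid, modulated per query patch by a learned sigmoid gate over patch features. The function names, the gating projection `w_gate`, and the exact fusion rule (an exponential decay whose rate is scaled by the gate) are illustrative assumptions, not the paper's actual CAG formulation.

```python
import numpy as np

def manhattan_distance_matrix(h, w):
    """Pairwise Manhattan distances between all patches on an h x w grid."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)          # (h*w, 2)
    return np.abs(coords[:, None, :] - coords[None, :, :]).sum(-1)  # (h*w, h*w)

def context_aware_decay(x, w_gate, h, w):
    """Illustrative data-dependent spatial decay (not the paper's exact CAG).

    x      : (n, d) patch features, n = h*w
    w_gate : (d,) hypothetical gating projection
    Returns a (n, n) multiplicative attention mask in (0, 1].
    """
    dist = manhattan_distance_matrix(h, w)        # fixed spatial prior
    gate = 1.0 / (1.0 + np.exp(-x @ w_gate))      # per-patch content gate in (0, 1)
    rate = gate[:, None]                          # each query sets its own decay rate
    return np.exp(-rate * dist)                   # fast decay for high-gate patches

# usage: build a mask for a 4x4 patch grid and apply it to attention scores
h, w, d = 4, 4, 8
rng = np.random.default_rng(0)
x = rng.standard_normal((h * w, d))
mask = context_aware_decay(x, rng.standard_normal(d), h, w)
# mask would multiply the attention matrix elementwise before/after softmax
```

The key contrast with fixed spatial decay is visible in `rate`: a static method would use one global constant there, whereas here each patch's content decides how sharply its attention falls off with distance.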
👥 Authors
Yuxin Mao
Northwestern Polytechnical University
Zhen Qin
TapTap
Jinxing Zhou
OpenNLPLab
Bin Fan
Northwestern Polytechnical University
Jing Zhang
Northwestern Polytechnical University
Yiran Zhong
PhD, Australian National University
LLM, Self-supervised Learning, Visual Geometry Learning, Natural Language Processing, Multimodal
Yuchao Dai
Northwestern Polytechnical University