Discrete Cosine Transform Based Decorrelated Attention for Vision Transformers

📅 2024-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two key challenges in Vision Transformers (ViTs): unstable self-attention weight initialization and computational redundancy. We propose a novel frequency-domain paradigm based on the Discrete Cosine Transform (DCT) for both initialization and compression of attention weights. To our knowledge, this is the first application of DCT to ViT attention initialization—enabling frequency-domain decoupling, energy concentration, and improved training stability. We further introduce a DCT coefficient truncation strategy to sparsify attention weights in the frequency domain, substantially reducing parameter count and FLOPs. The method integrates seamlessly with mainstream architectures such as Swin Transformer. On image classification, our DCT-based initialization accelerates convergence and improves final accuracy; the compressed DCT-Swin-T achieves over 60% weight matrix reduction and nearly 50% FLOPs reduction on ImageNet, with no accuracy loss. Our core contribution is the first unified frequency-domain framework jointly optimizing initialization and compression for ViT attention mechanisms.

📝 Abstract
Central to the Transformer architecture's effectiveness is the self-attention mechanism, a function that maps queries, keys, and values into a high-dimensional vector space. However, training the attention weights of queries, keys, and values is non-trivial from a state of random initialization. In this paper, we propose two methods. (i) We first address the initialization problem of Vision Transformers by introducing a simple yet highly innovative initialization approach utilizing discrete cosine transform (DCT) coefficients. Our proposed DCT-based attention initialization yields a significant gain over traditional initialization strategies, offering a robust foundation for the attention mechanism. Our experiments reveal that DCT-based initialization enhances the accuracy of Vision Transformers in classification tasks. (ii) We also observe that since the DCT effectively decorrelates image information in the frequency domain, this decorrelation is useful for compression: it allows the quantization step to discard many of the higher-frequency components. Based on this observation, we propose a novel DCT-based compression technique for the attention function of Vision Transformers. Since high-frequency DCT coefficients usually correspond to noise, we truncate the high-frequency DCT components of the input patches. Our DCT-based compression reduces the size of the weight matrices for queries, keys, and values. While maintaining the same level of accuracy, our DCT-compressed Swin Transformers achieve a considerable decrease in computational overhead.
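The truncation step described in the abstract can be illustrated with a minimal sketch (not the authors' implementation — the `dct_truncate` helper, the 16×16 patch, and the 8×8 kept block are illustrative assumptions): take the orthonormal 2-D DCT of a patch, zero out the high-frequency coefficients, and invert. Because natural image patches concentrate their energy in low frequencies, most of the signal survives.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_truncate(patch, k):
    """Reconstruct a 2-D patch from only its k x k lowest-frequency
    DCT coefficients (orthonormal 2-D DCT-II)."""
    coeffs = dctn(patch, norm="ortho")         # forward 2-D DCT
    mask = np.zeros_like(coeffs)
    mask[:k, :k] = 1.0                         # keep low-frequency block
    return idctn(coeffs * mask, norm="ortho")  # inverse 2-D DCT

# A smooth patch concentrates its energy in the low frequencies,
# so discarding the high-frequency coefficients loses little energy.
x = np.linspace(0.0, 1.0, 16)
patch = np.outer(x, x)                 # smooth illustrative 16x16 patch
approx = dct_truncate(patch, 8)        # keep 8x8 of 16x16 coefficients
retained = np.sum(approx ** 2) / np.sum(patch ** 2)
```

Dropping three quarters of the coefficients here retains well over 90% of the patch energy, which is the intuition behind shrinking the query/key/value weight matrices without an accuracy loss.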
Problem

Research questions and friction points this paper is trying to address.

Improving Vision Transformer initialization using DCT coefficients.
Enhancing attention mechanism accuracy in classification tasks.
Reducing computational overhead via DCT-based attention compression.
Innovation

Methods, ideas, or system contributions that make the work stand out.

DCT-based attention initialization for Vision Transformers
DCT decorrelates image information for compression
Truncating high-frequency DCT components reduces computation
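The initialization idea above can be sketched in a few lines. The paper summary does not spell out the exact scheme, so this is a hedged illustration under one assumption: the attention projection is seeded with the standard orthonormal DCT-II matrix (a deterministic, orthogonal, decorrelating basis) rather than random values; `dct_matrix` and the 64-dimensional size are hypothetical names, not the authors' API.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal n x n DCT-II transform matrix (standard construction)."""
    k = np.arange(n)
    # C[j, k] = cos(pi * (2k + 1) * j / (2n)), scaled to be orthonormal
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= np.sqrt(1.0 / n)   # DC row scaling
    C[1:, :] *= np.sqrt(2.0 / n)  # remaining rows
    return C

# Illustrative use: seed a query projection for a 64-dim head with the
# DCT basis instead of random initialization. Orthogonality means the
# initial projection preserves norms and decorrelates input dimensions.
W_q = dct_matrix(64)
```

Because the matrix is orthogonal (`W_q @ W_q.T` is the identity), the initial attention projection neither amplifies nor collapses the input, which is one plausible reading of why a DCT seed stabilizes early training.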
Hongyi Pan
Northwestern University
Signal Processing, Machine Learning, Image Processing, Federated Learning
Emadeldeen Hamdan
Ph.D. Student, Department of Electrical and Computer Engineering, University of Illinois Chicago
Signal Processing, Data Science
Xin Zhu
Department of Electrical and Computer Engineering, University of Illinois Chicago, Chicago, IL 60607
A. Cetin
Department of Electrical and Computer Engineering, University of Illinois Chicago, Chicago, IL 60607
Ulaş Bağci
Machine and Hybrid Intelligence Lab, Northwestern University, Chicago, IL 60611