🤖 AI Summary
This work addresses two key challenges in Vision Transformers (ViTs): unstable self-attention weight initialization and computational redundancy. We propose a novel frequency-domain paradigm based on the Discrete Cosine Transform (DCT) for both initialization and compression of attention weights. To our knowledge, this is the first application of DCT to ViT attention initialization—enabling frequency-domain decoupling, energy concentration, and improved training stability. We further introduce a DCT coefficient truncation strategy to sparsify attention weights in the frequency domain, substantially reducing parameter count and FLOPs. The method integrates seamlessly with mainstream architectures such as Swin Transformer. On image classification, our DCT-based initialization accelerates convergence and improves final accuracy; the compressed DCT-Swin-T achieves over 60% weight matrix reduction and nearly 50% FLOPs reduction on ImageNet, with no accuracy loss. Our core contribution is the first unified frequency-domain framework jointly optimizing initialization and compression for ViT attention mechanisms.
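As a back-of-envelope check of the claimed reductions, the arithmetic below assumes the compression keeps roughly the lowest 40% of DCT frequencies and a Swin-T embedding dimension of 96 (both are our assumptions for illustration; the summary does not state these values). Truncating each d×d attention projection to k×d rows shrinks parameters and the matching projection FLOPs by the same ratio:

```python
# Hypothetical sizing: d = Swin-T embed dim, keep = fraction of DCT frequencies retained.
d, keep = 96, 0.4
k = int(d * keep)                      # number of low-frequency rows kept

params_full = 3 * d * d                # Q, K, V projections, uncompressed
params_trunc = 3 * k * d               # Q, K, V projections after truncation
reduction = 1 - params_trunc / params_full

print(f"kept rows: {k}, weight reduction: {reduction:.1%}")  # over 60%
```

A 40% keep-ratio yields a ~60.4% cut in projection weights, consistent with the "over 60% weight matrix reduction" figure; the actual ratio used in the paper may differ.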
📝 Abstract
Central to the effectiveness of Transformer architectures is the self-attention mechanism, a function that maps queries, keys, and values into a high-dimensional vector space. However, training the attention weights for queries, keys, and values from random initialization is non-trivial. In this paper, we propose two methods. (i) We first address the initialization problem of Vision Transformers by introducing a simple, yet highly innovative, initialization approach utilizing discrete cosine transform (DCT) coefficients. Our proposed DCT-based *attention* initialization marks a significant gain over traditional initialization strategies, offering a robust foundation for the attention mechanism. Our experiments reveal that DCT-based initialization improves the classification accuracy of Vision Transformers. (ii) We also observe that the DCT effectively decorrelates image information in the frequency domain; this decorrelation aids compression because the quantization step can then discard many of the higher-frequency components. Based on this observation, we propose a novel DCT-based compression technique for the attention function of Vision Transformers. Since high-frequency DCT coefficients usually correspond to noise, we truncate the high-frequency DCT components of the input patches. Our DCT-based compression reduces the size of the weight matrices for queries, keys, and values. While maintaining the same level of accuracy, our DCT-compressed Swin Transformers achieve a considerable reduction in computational overhead.
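The abstract does not specify how the DCT coefficients enter the weights, so the sketch below is one plausible reading, not the paper's implementation: initialize a projection with the orthonormal DCT-II basis (so rows are decorrelated frequency filters), then compress by dropping the high-frequency rows. The `dct_basis`, `dct_init`, and `dct_truncate` names and the 50% keep-ratio are our own illustrative choices:

```python
import numpy as np

def dct_basis(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix; row k is the k-th frequency filter."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (t + 0.5) * k / n)
    basis[0, :] /= np.sqrt(2.0)  # DC row gets the 1/sqrt(n) scaling
    return basis

def dct_init(d_model: int) -> np.ndarray:
    """Hypothetical DCT-based init: use the DCT basis as a Q/K/V weight matrix."""
    return dct_basis(d_model)

def dct_truncate(weight: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Hypothetical compression: keep only the low-frequency rows."""
    k = int(weight.shape[0] * keep_ratio)
    return weight[:k, :]  # (k, d_model): projects onto the k lowest frequencies

W = dct_init(64)
W_small = dct_truncate(W, keep_ratio=0.5)   # 32 x 64: half the parameters
assert np.allclose(W @ W.T, np.eye(64))     # full basis is orthonormal
```

Because the basis is orthonormal, the full initialization is energy-preserving, and truncation discards exactly the high-frequency (typically noise-dominated) subspace that the abstract describes.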