🤖 AI Summary
To address the high energy consumption of Transformer architectures, this paper proposes an energy-efficient spiking Transformer. The core innovation is a purely additive, multiplication-free, event-driven spiking self-attention mechanism, termed Accurate Addition-Only Spiking Self-Attention (A²OS²A), which eliminates the conventional softmax and scaling operations. A²OS²A integrates binary, ReLU, and ternary spiking neurons to balance accuracy against hardware efficiency. Evaluated on ImageNet-1K, the proposed architecture achieves 78.66% top-1 accuracy, substantially outperforming existing spiking Transformers, and it attains state-of-the-art performance across multiple benchmark vision datasets. This work establishes a new paradigm for low-power, brain-inspired visual modeling, advancing the feasibility of neuromorphic deep learning in resource-constrained scenarios.
📝 Abstract
Transformers have demonstrated outstanding performance across a wide range of tasks, owing to their self-attention mechanism, but they are highly energy-consuming. Spiking Neural Networks (SNNs) have emerged as a promising energy-efficient alternative to traditional Artificial Neural Networks, leveraging event-driven computation and binary spikes for information transfer. Combining the capabilities of Transformers with the energy efficiency of SNNs is therefore a compelling opportunity. This paper addresses the challenge of adapting the self-attention mechanism of Transformers to the spiking paradigm by introducing a novel approach: Accurate Addition-Only Spiking Self-Attention (A$^2$OS$^2$A). Unlike existing methods that rely solely on binary spiking neurons for all components of the self-attention mechanism, our approach integrates binary, ReLU, and ternary spiking neurons. This hybrid strategy significantly improves accuracy while keeping the computation free of multiplications. Moreover, our method eliminates the need for softmax and scaling operations. Extensive experiments show that the A$^2$OS$^2$A-based Spiking Transformer outperforms existing SNN-based Transformers on several datasets, achieving 78.66% top-1 accuracy on ImageNet-1K. Our work represents a significant advancement in SNN-based Transformer models, offering a more accurate and efficient solution for real-world applications.
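The abstract does not spell out which neuron type gates Q, K, and V, so the sketch below assumes one illustrative assignment (binary spikes for Q, ReLU for K, ternary spikes for V) and uses single-step threshold neurons with straight-through gradients rather than full membrane dynamics. Under those assumptions, both attention matrix products reduce to additions and subtractions, and no softmax or scaling is needed. `BinarySpike`, `TernarySpike`, and `AdditionOnlySelfAttention` are hypothetical names for illustration, not the paper's code. A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class BinarySpike(nn.Module):
    """Heaviside firing with a straight-through gradient: outputs in {0, 1}."""
    def forward(self, x):
        spike = (x > 0).float()
        return spike.detach() + x - x.detach()  # forward = spike, backward = identity

class TernarySpike(nn.Module):
    """Three-level firing with a straight-through gradient: outputs in {-1, 0, 1}."""
    def __init__(self, threshold=0.5):
        super().__init__()
        self.threshold = threshold
    def forward(self, x):
        spike = (x > self.threshold).float() - (x < -self.threshold).float()
        return spike.detach() + x - x.detach()

class AdditionOnlySelfAttention(nn.Module):
    """Softmax-free, scaling-free self-attention in which every matrix product
    reduces to additions/subtractions, because one operand is always binary or
    ternary. The Q/K/V neuron assignment is an illustrative assumption, not
    necessarily the paper's exact configuration."""
    def __init__(self, dim, heads=8):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.out_proj = nn.Linear(dim, dim, bias=False)
        self.q_neuron = BinarySpike()    # Q in {0, 1}
        self.k_neuron = nn.ReLU()        # K real-valued but non-negative
        self.v_neuron = TernarySpike()   # V in {-1, 0, 1}

    def _split(self, t):  # (B, N, D) -> (B, H, N, D/H)
        b, n, d = t.shape
        return t.view(b, n, self.heads, d // self.heads).transpose(1, 2)

    def forward(self, x):  # x: (batch, tokens, dim)
        q = self._split(self.q_neuron(self.q_proj(x)))
        k = self._split(self.k_neuron(self.k_proj(x)))
        v = self._split(self.v_neuron(self.v_proj(x)))
        # Q is binary, so Q @ K^T just sums the K entries that Q selects.
        attn = q @ k.transpose(-2, -1)   # no softmax, no 1/sqrt(d) scaling
        # V is ternary, so attn @ V only adds or subtracts attention entries.
        out = attn @ v                   # (B, H, N, D/H)
        b, h, n, dh = out.shape
        return self.out_proj(out.transpose(1, 2).reshape(b, n, h * dh))

layer = AdditionOnlySelfAttention(dim=64, heads=8)
print(layer(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
```

Because Q is non-negative and K is passed through ReLU, the raw attention scores are already non-negative, which is what lets this family of spiking attention designs drop softmax normalization entirely.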