Spiking Transformer: Introducing Accurate Addition-Only Spiking Self-Attention for Transformer

📅 2025-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high energy consumption of Transformer architectures, this paper proposes an energy-efficient spiking Transformer. The core innovation is a purely additive, multiplication-free, event-driven spiking self-attention mechanism, termed Accurate Addition-Only Spiking Self-Attention (A²OS²A), which eliminates the conventional softmax and scaling operations. A²OS²A combines binary, ReLU, and ternary spiking neurons in a mixed-precision design to balance computational accuracy and hardware efficiency. Evaluated on ImageNet-1K, the proposed architecture achieves 78.66% top-1 accuracy, significantly outperforming existing spiking Transformers, and attains state-of-the-art performance across multiple benchmark vision datasets. This work establishes a paradigm for low-power, brain-inspired visual modeling and advances the feasibility of neuromorphic deep learning in resource-constrained scenarios.
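The summary above describes the mechanism only at a high level. The PyTorch sketch below is a hypothetical, single-timestep illustration of what an addition-only, softmax-free spiking self-attention block could look like; the class and function names, the assignment of binary/ReLU/ternary neurons to Q/K/V, and the thresholds are assumptions for illustration, not the paper's implementation (which also involves multi-timestep simulation and surrogate-gradient training).

```python
import torch
import torch.nn as nn


def binary_spike(x, threshold=0.5):
    # Binary (Heaviside) spiking: emit 1 when the input crosses the threshold, else 0.
    # Real SNN training would pair this with a surrogate gradient; omitted here.
    return (x >= threshold).float()


def ternary_spike(x, threshold=0.5):
    # Ternary spiking: emit +1 / -1 when |input| crosses the threshold, else 0.
    return (x >= threshold).float() - (x <= -threshold).float()


class AdditionOnlySpikingSelfAttentionSketch(nn.Module):
    """Softmax-free, scale-free self-attention over spiking activations.

    In this sketch, Q uses binary spikes, K uses ReLU activations, and V uses
    ternary spikes; that assignment is an assumption for illustration. Because
    Q takes values in {0, 1} and V in {-1, 0, +1}, the two matrix products
    reduce to additions/subtractions on event-driven hardware; dense matmuls
    are used here only for readability.
    """

    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.out_proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x):                          # x: (batch, tokens, dim)
        q = binary_spike(self.q_proj(x))           # values in {0, 1}
        k = torch.relu(self.k_proj(x))             # non-negative real values
        v = ternary_spike(self.v_proj(x))          # values in {-1, 0, +1}
        attn = q @ k.transpose(-2, -1)             # no softmax, no 1/sqrt(d) scaling
        return self.out_proj(attn @ v)


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)
    print(AdditionOnlySpikingSelfAttentionSketch(64)(x).shape)  # torch.Size([2, 16, 64])
```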

📝 Abstract
Transformers have demonstrated outstanding performance across a wide range of tasks, owing to their self-attention mechanism, but they are highly energy-consuming. Spiking Neural Networks (SNNs) have emerged as a promising energy-efficient alternative to traditional Artificial Neural Networks, leveraging event-driven computation and binary spikes for information transfer. The combination of Transformers' capabilities with the energy efficiency of SNNs offers a compelling opportunity. This paper addresses the challenge of adapting the self-attention mechanism of Transformers to the spiking paradigm by introducing a novel approach: Accurate Addition-Only Spiking Self-Attention (A²OS²A). Unlike existing methods that rely solely on binary spiking neurons for all components of the self-attention mechanism, our approach integrates binary, ReLU, and ternary spiking neurons. This hybrid strategy significantly improves accuracy while preserving non-multiplicative computations. Moreover, our method eliminates the need for softmax and scaling operations. Extensive experiments show that the A²OS²A-based Spiking Transformer outperforms existing SNN-based Transformers on several datasets, even achieving an accuracy of 78.66% on ImageNet-1K. Our work represents a significant advancement in SNN-based Transformer models, offering a more accurate and efficient solution for real-world applications.
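As a concrete illustration of why binary spikes make the attention products multiplication-free, the toy check below (plain NumPy, illustrative only) shows that a dot product with a {0, 1} spike vector reduces to gathering and adding the key entries where the neuron fired.

```python
import numpy as np

q = np.array([1, 0, 1, 1, 0])                  # binary spike vector
k = np.array([0.3, 1.2, 0.5, 2.0, 0.7])        # arbitrary key values

via_matmul = q @ k                             # conventional dot product
via_addition = k[q == 1].sum()                 # gather-and-add, no multiplications

assert np.isclose(via_matmul, via_addition)
print(via_matmul)                              # 2.8
```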
Problem

Research questions and friction points this paper is trying to address.

Adapting Transformer self-attention to spiking neural networks
Improving accuracy with hybrid binary, ReLU, and ternary spiking neurons
Eliminating softmax and scaling operations for energy efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid binary, ReLU, ternary spiking neurons
Eliminates softmax and scaling operations (see the sketch after this list)
Accurate Addition-Only Spiking Self-Attention
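One further consequence of dropping softmax and scaling, well known from earlier softmax-free attention designs and shown here only as a generic sketch rather than a claim from this paper: the product Q Kᵀ V becomes associative, so Kᵀ V can be computed first, replacing the N × N attention map with a d × d intermediate.

```python
import torch

N, d = 196, 64                                     # tokens, embedding dimension

Q = (torch.rand(N, d) > 0.5).double()              # binary spike queries in {0, 1}
K = torch.relu(torch.randn(N, d, dtype=torch.float64))
V = torch.randint(-1, 2, (N, d)).double()          # ternary values in {-1, 0, +1}

quadratic = (Q @ K.T) @ V                          # builds an N x N attention map
linear = Q @ (K.T @ V)                             # builds only a d x d intermediate

print(torch.allclose(quadratic, linear))           # True: same result, lower cost for large N
```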
Yufei Guo
Engineer
Neural networks
Xiaode Liu
Intelligent Science & Technology Academy of CASIC, China
Yuanpei Chen
South China University of Technology
Robotics
Weihang Peng
Intelligent Science & Technology Academy of CASIC, China
Yuhan Zhang
Intelligent Science & Technology Academy of CASIC, China
Zhe Ma
Intelligent Science & Technology Academy of CASIC, China