TS-SNN: Temporal Shift Module for Spiking Neural Networks

πŸ“… 2025-05-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the challenge of jointly optimizing spatiotemporal feature modeling and energy efficiency in Spiking Neural Networks (SNNs), this paper proposes a lightweight Temporal Shift (TS) moduleβ€”the first to introduce learnable temporal shifting into SNNs. TS fuses spike features from past, present, and future time steps within a single timestep, while residual connections preserve information integrity. Crucially, TS introduces only one learnable parameter and incurs negligible additional computational overhead. This design significantly enhances temporal dynamics modeling without compromising latency or energy efficiency. Evaluated on CIFAR-10 (96.72% accuracy), CIFAR-100 (80.28%), and ImageNet (70.61%), TS achieves state-of-the-art performance while reducing the required number of inference timesteps. The method thus delivers both high accuracy and ultra-low power consumption, establishing a novel, efficient paradigm for spatiotemporal feature fusion in event-driven neuromorphic computing.

πŸ“ Abstract
Spiking Neural Networks (SNNs) are increasingly recognized for their biological plausibility and energy efficiency, positioning them as strong alternatives to Artificial Neural Networks (ANNs) in neuromorphic computing applications. SNNs inherently process temporal information by leveraging the precise timing of spikes, but balancing temporal feature utilization with low energy consumption remains a challenge. In this work, we introduce Temporal Shift module for Spiking Neural Networks (TS-SNN), which incorporates a novel Temporal Shift (TS) module to integrate past, present, and future spike features within a single timestep via a simple yet effective shift operation. A residual combination method prevents information loss by integrating shifted and original features. The TS module is lightweight, requiring only one additional learnable parameter, and can be seamlessly integrated into existing architectures with minimal additional computational cost. TS-SNN achieves state-of-the-art performance on benchmarks like CIFAR-10 (96.72%), CIFAR-100 (80.28%), and ImageNet (70.61%) with fewer timesteps, while maintaining low energy consumption. This work marks a significant step forward in developing efficient and accurate SNN architectures.
Problem

Research questions and friction points this paper is trying to address.

Balancing temporal feature utilization with low energy consumption in SNNs
Integrating past, present, and future spike features efficiently
Achieving high accuracy in SNNs with minimal computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal Shift module integrates past, present, future spikes
Residual combination prevents information loss in features
Lightweight TS module adds minimal computational cost
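The shift-and-fuse mechanism summarized above can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the channel partition into past/future/present blocks, the shift directions, and the use of a single blend parameter `alpha` as the one learnable parameter are assumptions made for illustration, in the spirit of TSM-style temporal shifting.

```python
import numpy as np

def temporal_shift(spikes, alpha=0.5):
    """Hypothetical sketch of the TS module's shift + residual fusion.

    spikes: array of shape [T, C] (timesteps x channels) of spike features.
    alpha:  assumed to play the role of the module's single learnable
            parameter, blending shifted and original features.
    """
    T, C = spikes.shape
    third = C // 3
    shifted = np.zeros_like(spikes, dtype=float)
    # first channel block carries features from the past (t-1)
    shifted[1:, :third] = spikes[:-1, :third]
    # second channel block carries features from the future (t+1)
    shifted[:-1, third:2 * third] = spikes[1:, third:2 * third]
    # remaining channels keep the present-timestep features
    shifted[:, 2 * third:] = spikes[:, 2 * third:]
    # residual combination: shifted and original features are blended,
    # preventing information loss from the shift operation
    return alpha * shifted + (1.0 - alpha) * spikes
```

Note that with `alpha = 0` the sketch reduces to the identity, which is one way such a module could be dropped into an existing architecture without disturbing its behavior, consistent with the paper's claim of seamless integration.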
Kairong Yu
Zhejiang University
Computer Vision, Multimodal Learning, Spiking Neural Network
Tianqing Zhang
Zhejiang University
Qi Xu
Dalian University of Technology
Gang Pan
Tianjin University
Computer Vision, Multimodal, AI
Hongwei Wang
Zhejiang University