Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity

📅 2025-06-10
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Spiking neural networks (SNNs) suffer from high training complexity, O(T), due to sequential simulation over T timesteps, which severely limits efficiency on long temporal sequences. To address this, we propose Fixed-Point Parallel Training (FPT), a training paradigm that reduces the complexity to O(K) with K ≈ 3, without altering the network architecture. The key innovation is the first reformulation of the Leaky Integrate-and-Fire (LIF) neuron model as a parallelizable fixed-point iteration, enabling full-timestep parallelization. We provide theoretical convergence guarantees and unify existing parallel SNN training approaches under this framework. Experiments demonstrate that FPT achieves significant training acceleration while preserving exact LIF dynamics, which is particularly beneficial for long-sequence tasks, and exhibits strong scalability and practical applicability.
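To make the reformulation concrete, here is a hedged sketch in our own notation (the paper's exact formulation may differ; the soft-reset LIF variant and all symbols below are our assumptions). Because each timestep's membrane potential is linear in the inputs and in past spikes, the whole spike train satisfies one fixed-point equation that can be iterated in parallel:

```latex
% Soft-reset LIF dynamics (notation assumed, not taken from the paper):
\[ u_t = \lambda\,(u_{t-1} - V_{\mathrm{th}}\, s_{t-1}) + I_t, \qquad s_t = H(u_t - V_{\mathrm{th}}) \]
% Unrolled over all T timesteps, u_t is linear in the inputs and past spikes:
\[ u_t = \sum_{\tau=1}^{t} \lambda^{t-\tau} I_\tau \;-\; V_{\mathrm{th}} \sum_{\tau=1}^{t-1} \lambda^{t-\tau} s_\tau \]
% Stacked form with lower-triangular kernels M (incl. diagonal) and R (strictly lower):
\[ \mathbf{u} = M\,\mathbf{I} - V_{\mathrm{th}}\, R\,\mathbf{s}, \qquad \mathbf{s} = H(\mathbf{u} - V_{\mathrm{th}}\mathbf{1}) \]
% Fixed-point iteration over the whole spike train, K iterations with K ~ 3:
\[ \mathbf{s}^{(k+1)} = H\!\left(M\,\mathbf{I} - V_{\mathrm{th}}\, R\,\mathbf{s}^{(k)} - V_{\mathrm{th}}\mathbf{1}\right) \]
```

By construction, any fixed point of this map reproduces the sequential LIF spike train; the paper's convergence analysis concerns how quickly the iteration reaches it.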

📝 Abstract
Spiking Neural Networks (SNNs) often suffer from high time complexity $O(T)$ due to sequential processing over $T$ timesteps, making training computationally expensive. In this paper, we propose a novel Fixed-point Parallel Training (FPT) method to accelerate SNN training without modifying the network architecture or introducing additional assumptions. FPT reduces the time complexity to $O(K)$, where $K$ is a small constant (usually $K=3$), by using a fixed-point iteration form of Leaky Integrate-and-Fire (LIF) neurons for all $T$ timesteps. We provide a theoretical convergence analysis of FPT and demonstrate that existing parallel spiking neurons can be viewed as special cases of our proposed method. Experimental results show that FPT effectively simulates the dynamics of original LIF neurons, significantly reducing computational time without sacrificing accuracy. This makes FPT a scalable and efficient solution for real-world applications, particularly for long-term tasks. Our code will be released at https://github.com/WanjinVon/FPT.
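As a concrete illustration of the abstract's claim, below is a minimal NumPy sketch of the fixed-point iteration under a soft-reset LIF assumption. `fpt_lif_spikes` and all parameter names are our own, not the API of the released code:

```python
import numpy as np

def fpt_lif_spikes(I, lam=0.9, v_th=1.0, K=3):
    """Simulate a soft-reset LIF neuron over all T timesteps at once
    via fixed-point iteration (a sketch; not the authors' released code).

    Dynamics assumed: u[t] = lam * (u[t-1] - v_th * s[t-1]) + I[t],
                      s[t] = H(u[t] - v_th).
    """
    T = I.shape[0]
    t = np.arange(T)
    # Leak kernel: M[t, tau] = lam**(t - tau) for tau <= t, else 0.
    M = np.tril(lam ** (t[:, None] - t[None, :]))
    # Reset kernel: same powers, but only strictly past spikes contribute.
    R = np.tril(M, k=-1)
    s = np.zeros(T)                      # initial guess: no spikes
    for _ in range(K):                   # K ~ 3 per the paper's claim
        u = M @ I - v_th * (R @ s)       # all T potentials in parallel
        s = (u >= v_th).astype(float)    # Heaviside spike update
    return s

# Toy usage: 3 iterations recover the sequential LIF spike train.
I = np.array([1.5, 0.2, 0.9, 1.1])
print(fpt_lif_spikes(I))  # [1. 0. 1. 1.]
```

On this toy input, the result matches a step-by-step simulation of the same LIF dynamics, consistent with the claim that the fixed point preserves the original neuron behavior.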
Problem

Research questions and friction points this paper is trying to address.

Reducing the high $O(T)$ time complexity of sequential SNN training
Accelerating training without modifying the network architecture or adding assumptions
Maintaining accuracy while significantly cutting computational time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fixed-point Parallel Training (FPT) for SNNs
Constant time complexity $O(K)$ with small $K$ (typically $K=3$)
Theoretical convergence guarantees without network modification
Wanjin Feng
Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Xingyu Gao
Professor of Computer Science, Chinese Academy of Sciences
Machine Learning · Computer Vision · Multimedia · Ubiquitous Computing
Wenqian Du
Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Hailong Shi
Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Peilin Zhao
Tencent AI Lab, Shenzhen, China
Pengcheng Wu
Volvo Cars / KTH Royal Institute of Technology
motion planning and control of robotics · state estimation and uncertainty quantification · safety
Chunyan Miao
Nanyang Technological University
human agent interaction · human computation · cognitive agents · incentives · serious games