Channel-wise Parallelizable Spiking Neuron with Multiplication-free Dynamics and Large Temporal Receptive Fields

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the slow training, high hardware overhead, and poor deployability of Spiking Neural Networks (SNNs) on resource-constrained devices for long-sequence modeling, this paper proposes a hardware-friendly SNN architecture. The method introduces four key innovations: (1) a channel-wise parallelized dynamic neuron model enabling large temporal receptive fields; (2) channel-wise convolution to enhance spatiotemporal feature extraction; (3) a sawtooth dilation mechanism that reduces computational complexity; and (4) fully shift-based arithmetic that replaces multiplications, significantly lowering computational cost and power consumption. Evaluated on benchmark neuromorphic datasets (SHD, sequential CIFAR, and DVS-Lip), the approach achieves state-of-the-art accuracy while accelerating training by 2.1–3.8× and reducing memory footprint by 37%–52%. These results demonstrate substantial improvements in both efficiency and hardware deployability.

📝 Abstract
Spiking Neural Networks (SNNs) are distinguished from Artificial Neural Networks (ANNs) by their sophisticated neuronal dynamics and sparse binary activations (spikes), inspired by the biological nervous system. Traditional neuron models use iterative step-by-step dynamics, which forces serial computation and slows SNN training. Recently, parallelizable spiking neuron models have been proposed to fully exploit the massive parallelism of graphics processing units and accelerate SNN training. However, existing parallelizable spiking neuron models involve dense floating-point operations and can learn long-term dependencies well only with a large neuron order, at the cost of heavy computation and memory. To resolve this dilemma between performance and cost, we propose the multiplication-free channel-wise Parallel Spiking Neuron, which is hardware-friendly and suited to the resource-constrained application scenarios of SNNs. The proposed neuron introduces channel-wise convolution to enhance learning ability, adopts sawtooth dilations to reduce the neuron order, and employs bit-shift operations to avoid multiplications. The design and implementation of the acceleration algorithms are discussed in detail. Our methods are validated on the neuromorphic Spiking Heidelberg Digits audio dataset, sequential CIFAR images, and the neuromorphic DVS-Lip vision dataset, achieving the best accuracy among SNNs. Training-speed results demonstrate the effectiveness of our acceleration methods, providing a practical reference for future research.
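The multiplication-free idea mentioned in the abstract can be illustrated with a minimal sketch: a leak factor of the form 1 − 2⁻ᵏ lets the decay v·(1 − 2⁻ᵏ) be computed as v − (v >> k), using only a shift and a subtraction. The function name `shift_lif_step`, the shift amount, the threshold, and the hard-reset rule below are illustrative assumptions for a simple integer LIF neuron, not the paper's exact channel-wise parallel neuron model.

```python
import numpy as np

def shift_lif_step(v, x, shift=2, v_threshold=64):
    """One step of an integer LIF neuron with a shift-based leak.

    The leak v * (1 - 2**-shift) is computed as v - (v >> shift), so no
    multiplication is needed. Illustrative sketch, not the paper's model.
    """
    v = v - (v >> shift) + x                 # leak via right shift, then integrate input
    spikes = (v >= v_threshold).astype(np.int64)
    v = np.where(spikes == 1, 0, v)          # hard reset after a spike
    return spikes, v

# Usage: drive one neuron with a constant integer input current.
v = np.zeros(1, dtype=np.int64)
for t in range(10):
    s, v = shift_lif_step(v, np.array([20], dtype=np.int64))
```

With shift=2 the effective leak factor is 0.75; on shift-friendly hardware this replaces one multiply per neuron per timestep with a single bit shift, which is the kind of saving the abstract's "bit shift operation" refers to.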
Problem

Research questions and friction points this paper is trying to address.

Spiking Neural Networks
Long Temporal Sequence
Computational Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-multiplicative Channel-wise Parallel Spiking Neuron
resource efficiency
Spiking Neural Networks (SNNs) acceleration
Peng Xue
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences; Peng Cheng Laboratory
Wei Fang
School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University
Zhengyu Ma
Peng Cheng Laboratory
Neuroscience, Neural Network Dynamics, Computational Physics
Zihan Huang
School of Computer Science, Peking University
Zhaokun Zhou
Peng Cheng Laboratory; School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University
Yonghong Tian
Peng Cheng Laboratory; School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University; School of Computer Science, Peking University
T. Masquelier
Centre de Recherche Cerveau et Cognition (CERCO), UMR5549 CNRS–Université Toulouse 3
Huihui Zhou
Peng Cheng Laboratory