A High Energy-Efficiency Multi-core Neuromorphic Architecture for Deep SNN Training

📅 2024-11-26
🤖 AI Summary
To address the bottleneck of native end-to-end backpropagation (BP) training for Spiking Neural Networks (SNNs) on edge devices, this work proposes a multi-core neuromorphic architecture supporting direct BP training. Each core integrates dedicated forward-propagation, back-propagation, and weight-gradient computation engines. The design introduces two-level parallelism (engine-level and core-level) and synergistically combines spike-driven sparse dataflow scheduling, FP16 gradient computation, and an on-chip weight update engine. The architecture achieves 1.05 TFLOPS/W energy efficiency (FP16, 28 nm) and reduces DRAM accesses by 55–85% compared to an NVIDIA A100 GPU; FPGA prototypes demonstrate scalable deep SNN training across 20 cores and federated learning across 5 worker nodes. To the authors' knowledge, this is the first hardware implementation to break the technical barrier for native SNN training at the edge.
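Direct BP training of SNNs generally requires a surrogate gradient, because the spike function is non-differentiable; the summary above does not detail the paper's exact neuron model or surrogate, so the following is an illustrative sketch only. It shows a leaky integrate-and-fire (LIF) forward step and a rectangular surrogate derivative, the kind of computation the forward-propagation and back-propagation engines would respectively accelerate. The function names and constants (`v_th`, `decay`, `alpha`) are assumptions, not the paper's parameters.

```python
import numpy as np

def lif_step(x, v, v_th=1.0, decay=0.5):
    """One LIF time step: leak, integrate input current, fire, hard-reset.

    Returns the binary spike vector, the pre-reset membrane potential
    (needed by the backward pass), and the post-reset potential.
    """
    v = decay * v + x                      # leaky integration
    spikes = (v >= v_th).astype(x.dtype)   # non-differentiable threshold
    v_post = v * (1.0 - spikes)            # hard reset where a spike fired
    return spikes, v, v_post

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Rectangular surrogate for d(spike)/d(v): nonzero only near threshold."""
    return (np.abs(v - v_th) < 1.0 / alpha).astype(v.dtype) * (alpha / 2.0)
```

In the backward pass, the chain rule multiplies upstream gradients by `surrogate_grad(v_pre)` instead of the true (almost-everywhere-zero) derivative of the threshold, which is what makes native end-to-end BP through spiking layers possible at all.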

📝 Abstract
There is a growing need for edge training to adapt to dynamically changing environments. Neuromorphic computing is a promising pathway to high-efficiency intelligent computation on energy-constrained edge devices, but existing neuromorphic architectures lack the ability to directly train spiking neural networks (SNNs) via backpropagation. We develop a multi-core neuromorphic architecture with Feedforward-Propagation, Back-Propagation, and Weight-Gradient engines in each core, supporting highly efficient parallel computing at both the engine and core levels. By combining multiple dataflows with sparse-computation optimizations that fully exploit the sparsity in SNN training, it attains a high energy efficiency of 1.05 TFLOPS/W @ FP16 @ 28 nm, a 55–85% reduction in DRAM accesses compared to an A100 GPU during SNN training, and demonstrates 20-core deep SNN training and 5-worker federated learning on FPGAs. Our study develops the first multi-core neuromorphic architecture supporting direct SNN training, facilitating neuromorphic computing in edge-learnable applications.
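The abstract attributes the DRAM-access reduction to exploiting sparsity in SNN training, but does not spell out the mechanism; one common source of such savings, sketched below under that assumption, is the weight-gradient computation. Because layer inputs are binary spikes, the outer product `dL/dW = delta ⊗ spikes` only has nonzero columns where a neuron actually fired, so silent inputs need neither a multiply nor a memory access. All names here are illustrative.

```python
import numpy as np

def sparse_weight_grad(delta, spikes):
    """Accumulate dL/dW = outer(delta, spikes), skipping silent inputs.

    delta  : error signal for the layer's outputs, shape (n_out,)
    spikes : binary input spike vector, shape (n_in,)
    Since spike values are exactly 1, active columns are a copy of delta,
    so the kernel is multiply-free and touches only the active columns.
    """
    grad = np.zeros((delta.size, spikes.size), dtype=delta.dtype)
    active = np.nonzero(spikes)[0]    # indices of neurons that fired
    grad[:, active] = delta[:, None]  # write only the fired columns
    return grad
```

At typical SNN firing rates, most columns are skipped, which is the kind of spike-driven saving in arithmetic and off-chip traffic that a dedicated Weight-Gradient engine can exploit.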
Problem

Research questions and friction points this paper is trying to address.

Neuromorphic Architecture
Backpropagation Training
Spiking Neural Networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multicore Neuromorphic Architecture
Sparse Spiking Neural Networks
Energy-Efficient Parallel Computing
👥 Authors
Mingjing Li (Peng Cheng Laboratory, Shenzhen, China)
Huihui Zhou (Peng Cheng Laboratory)
Xiaofeng Xu (Peng Cheng Laboratory, Shenzhen, China)
Zhiwei Zhong (Peng Cheng Laboratory, Shenzhen, China)
Puli Quan (Peng Cheng Laboratory, Shenzhen, China)
Xueke Zhu (Peng Cheng Laboratory, Shenzhen, China)
Yanyu Lin (Peng Cheng Laboratory, Shenzhen, China)
Wenjie Lin (Peng Cheng Laboratory, Shenzhen, China)
Hongyu Guo (Senior Research Scientist, NRC Canada; Adjunct Professor, University of Ottawa; interests: machine learning, deep learning, geometric generative models, graph networks)
Junchao Zhang (Peng Cheng Laboratory, Shenzhen, China)
Yun-Xiang Ma (Peng Cheng Laboratory, Shenzhen, China; Southern University of Science and Technology, Shenzhen, China)
Wei Wang (Peng Cheng Laboratory, Shenzhen, China)
Qingyang Meng (Peng Cheng Laboratory, Shenzhen, China)
Zhengyu Ma (Peng Cheng Laboratory; interests: neuroscience, neural network dynamics, computational physics)
Guoqi Li (Professor, Institute of Automation, Chinese Academy of Sciences; previously Tsinghua University; interests: brain-inspired computing, spiking neural networks, brain-inspired large models, NeuroAI)
Xiao-Ya Cui (Peng Cheng Laboratory, Shenzhen, China; School of Computer Science, Peking University, Beijing, China)
Yonghong Tian (Peng Cheng Laboratory, Shenzhen, China; School of Computer Science, Peking University, Beijing, China; School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, China)