Overcoming the Limitations of Layer Synchronization in Spiking Neural Networks

📅 2024-08-09
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
The fundamental mismatch between layer-synchronous training and asynchronous hardware execution in spiking neural networks (SNNs)—where models rely on layer-wise synchronization during training yet must operate efficiently on synchronization-free asynchronous neuromorphic hardware—hinders joint optimization of performance, energy efficiency, and latency. To address this, we propose *unlayered backpropagation*, a novel training paradigm that decouples neuronal updates from rigid layer-wise structural constraints, enabling flexible event-driven scheduling and compatibility with diverse asynchronous spike-triggering policies. Evaluated on an event-driven simulation platform, our approach reduces spike density by up to 50%, reaches correct decisions up to 2x faster, and improves classification accuracy by up to 10% over conventional layer-synchronous SNNs. This work achieves, for the first time, concurrent optimization of high accuracy, low power consumption, and rapid response—establishing a scalable framework, consistent between training and deployment, for asynchronous brain-inspired computing.

📝 Abstract
Currently, neural-network processing in machine learning applications relies on layer synchronization, whereby neurons in a layer aggregate incoming currents from all neurons in the preceding layer, before evaluating their activation function. This is practiced even in artificial Spiking Neural Networks (SNNs), which are touted as consistent with neurobiology, in spite of processing in the brain being, in fact, asynchronous. A truly asynchronous system however would allow all neurons to evaluate concurrently their threshold and emit spikes upon receiving any presynaptic current. Omitting layer synchronization is potentially beneficial, for latency and energy efficiency, but asynchronous execution of models previously trained with layer synchronization may entail a mismatch in network dynamics and performance. We present a study that documents and quantifies this problem in three datasets on our simulation environment that implements network asynchrony, and we show that models trained with layer synchronization either perform sub-optimally in absence of the synchronization, or they will fail to benefit from any energy and latency reduction, when such a mechanism is in place. We then "make ends meet" and address the problem with unlayered backprop, a novel backpropagation-based training method, for learning models suitable for asynchronous processing. We train with it models that use different neuron execution scheduling strategies, and we show that although their neurons are more reactive, these models consistently exhibit lower overall spike density (up to 50%), reach a correct decision faster (up to 2x) without integrating all spikes, and achieve superior accuracy (up to 10% higher). Our findings suggest that asynchronous event-based (neuromorphic) AI computing is indeed more efficient, but we need to seriously rethink how we train our SNN models, to benefit from it.
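The asynchrony the abstract describes can be made concrete with a small sketch. Below is an illustrative, hypothetical event-driven simulation (not the authors' code or their unlayered-backprop method): each presynaptic spike is an event in a time-ordered queue, and a neuron checks its threshold the moment any input current arrives, rather than waiting for an entire layer to finish aggregating.

```python
import heapq

THRESHOLD = 1.0  # illustrative firing threshold (assumption, not from the paper)

def run_async(weights, input_spikes):
    """Event-driven, synchronization-free processing of a tiny
    integrate-and-fire network.

    weights: dict mapping source neuron id -> list of (target id, weight)
    input_spikes: list of (time, neuron id) external spike events
    Returns all (time, neuron id) spike events in time order (inputs included).
    """
    potential = {}               # membrane potential per neuron
    events = list(input_spikes)  # min-heap ordered by event time
    heapq.heapify(events)
    emitted = []
    while events:
        t, src = heapq.heappop(events)
        emitted.append((t, src))
        for tgt, w in weights.get(src, []):
            potential[tgt] = potential.get(tgt, 0.0) + w
            # No layer barrier: the threshold is evaluated on every
            # incoming current, so a neuron may fire before "its layer"
            # has received all inputs.
            if potential[tgt] >= THRESHOLD:
                potential[tgt] = 0.0  # reset after spiking
                heapq.heappush(events, (t + 1e-3, tgt))  # small synaptic delay

    return emitted

# Two input neurons (0, 1) both project to neuron 2 with weight 0.6;
# neuron 2 crosses threshold on the second input and fires immediately.
spikes = run_async({0: [(2, 0.6)], 1: [(2, 0.6)]}, [(0.0, 0), (0.5, 1)])
```

A layer-synchronous scheme would instead sum all of a layer's inputs per timestep before any threshold check; the paper's point is that models must be trained for the event-driven regime above, or their dynamics diverge from the trained behavior.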
Problem

Research questions and friction points this paper is trying to address.

Identifies performance mismatch in asynchronous SNN execution
Quantifies limitations of layer synchronization in neural networks
Explores training methods for efficient asynchronous spike processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalized backpropagation training ("unlayered backprop") for asynchronous neuron scheduling
Asynchronous neuron execution lowers overall spike density by up to 50%
Asynchronous processing reaches correct decisions up to 2x faster and improves accuracy by up to 10%