Frozen Backpropagation: Relaxing Weight Symmetry in Temporally-Coded Deep Spiking Neural Networks

📅 2025-05-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Direct training of Spiking Neural Networks (SNNs) on neuromorphic hardware is hampered by backpropagation's weight symmetry requirement: forward and backward passes run on separate networks whose weights must be kept symmetric, and the resulting weight transport adds hardware overhead and energy cost. Method: This paper proposes Frozen Backpropagation (fBP), which computes gradients with periodically frozen feedback weights, relaxing the symmetry constraint and sharply reducing how often weights must be transported. Three partial weight transport schemes of varying computational complexity further cut transport cost by transmitting only a subset of weights at a time. Contribution/Results: On CIFAR-10 and CIFAR-100, fBP matches standard backpropagation accuracy; with partial transport it reduces transport costs by 1,000x at accuracy drops of only 0.5 pp and 1.1 pp, respectively, or by up to 10,000x with moderate additional loss. The approach offers a scalable, energy-efficient path to BP-based on-chip learning and alleviates communication bottlenecks in neuromorphic systems.

📝 Abstract
Direct training of Spiking Neural Networks (SNNs) on neuromorphic hardware can greatly reduce energy costs compared to GPU-based training. However, implementing Backpropagation (BP) on such hardware is challenging because forward and backward passes are typically performed by separate networks with distinct weights. To compute correct gradients, forward and feedback weights must remain symmetric during training, necessitating weight transport between the two networks. This symmetry requirement imposes hardware overhead and increases energy costs. To address this issue, we introduce Frozen Backpropagation (fBP), a BP-based training algorithm relaxing weight symmetry in settings with separate networks. fBP updates forward weights by computing gradients with periodically frozen feedback weights, reducing weight transports during training and minimizing synchronization overhead. To further improve transport efficiency, we propose three partial weight transport schemes of varying computational complexity, where only a subset of weights is transported at a time. We evaluate our methods on image recognition tasks and compare them to existing approaches addressing the weight symmetry requirement. Our results show that fBP outperforms these methods and achieves accuracy comparable to BP. With partial weight transport, fBP can substantially lower transport costs by 1,000x with an accuracy drop of only 0.5pp on CIFAR-10 and 1.1pp on CIFAR-100, or by up to 10,000x at the expense of moderate accuracy loss. This work provides insights for guiding the design of neuromorphic hardware incorporating BP-based on-chip learning.
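To make the frozen-feedback idea concrete, here is a minimal sketch, not the authors' implementation: a plain rate-based two-layer NumPy network stands in for the temporally-coded SNN, and the layer sizes, learning rate, and transport period T are illustrative choices. The backward pass propagates the error through a separate feedback matrix that is refreshed from the forward weights only every T steps.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 64, 32, 10
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # forward weights, layer 2
B2 = W2.copy()                              # feedback weights (frozen copy of W2)
lr, T = 1e-2, 100                           # learning rate and transport period

def fbp_step(x, y_true, t):
    """One training step: gradients are computed with the frozen feedback weights B2."""
    global W1, W2, B2
    # Forward pass (a rate-based stand-in for the spiking forward network)
    h = np.maximum(W1 @ x, 0.0)
    y = W2 @ h
    err = y - y_true
    # Backward pass: the error is propagated through B2, not W2
    dW2 = np.outer(err, h)
    dh = (B2.T @ err) * (h > 0)
    dW1 = np.outer(dh, x)
    W1 -= lr * dW1
    W2 -= lr * dW2
    # Periodic weight transport: refresh the feedback weights every T steps
    if t % T == 0:
        B2 = W2.copy()
```

Between transports the feedback weights are stale, so the gradients are only approximate; the reported result is that this approximation costs little accuracy while cutting transport traffic by orders of magnitude.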
Problem

Research questions and friction points this paper is trying to address.

High energy cost of maintaining strict weight symmetry when training SNNs on neuromorphic hardware
Hardware overhead of the weight transport that backpropagation requires between forward and feedback networks
How to reduce transport frequency and volume without degrading gradient quality and accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frozen Backpropagation relaxes weight symmetry
Partial weight transport reduces synchronization overhead (see the sketch after this list)
Periodically frozen feedback weights reduce transport frequency and energy costs
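The paper proposes three partial weight transport schemes, but this page does not spell them out; the sketch below shows one plausible rule, assumed here purely for illustration: transport only the fraction of weights that have drifted most from the stale feedback copy. The function name, the drift criterion, and the fraction rho are hypothetical.

```python
import numpy as np

def partial_transport(W, B, rho=0.01):
    """Copy the rho fraction of entries of W with the largest |W - B| into B."""
    drift = np.abs(W - B)
    k = max(1, int(rho * drift.size))
    # Flat indices of the k most-drifted entries, mapped back to matrix coordinates
    idx = np.unravel_index(np.argpartition(drift, -k, axis=None)[-k:], W.shape)
    B = B.copy()
    B[idx] = W[idx]
    return B
```

Transporting only a small fraction of weights per synchronization is what lets the abstract's 1,000x to 10,000x transport-cost reductions coexist with small accuracy drops.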