🤖 AI Summary
Existing spiking neural network (SNN) training methods suffer from low temporal precision, high memory overhead, and poor hardware deployability because they rely on discrete-time simulation, surrogate-gradient approximations, and access to internal state variables, particularly the membrane potential. This work introduces the first analytical, event-driven learning framework for SNNs, enabling exact, joint gradient computation with respect to synaptic weights, transmission delays, and adaptive firing thresholds, without accessing membrane potentials or imposing discrete time steps. Grounded in biologically interpretable neuron dynamics, the method supports efficient, memory-light, end-to-end training with high temporal fidelity. Evaluated on multiple benchmark tasks, it achieves up to a 7% accuracy improvement over state-of-the-art approaches while significantly enhancing model robustness and temporal representational capacity. These advances substantially improve the practicality of SNNs for deployment on neuromorphic hardware.
📝 Abstract
Spiking neural networks (SNNs) inherently rely on the precise timing of discrete spike events for information processing. Incorporating additional bio-inspired degrees of freedom, such as trainable synaptic transmission delays and adaptive firing thresholds, is essential for fully leveraging the temporal dynamics of SNNs. Although recent methods have demonstrated the benefits of training synaptic weights and delays, in terms of both accuracy and temporal representation, these techniques typically rely on discrete-time simulations, surrogate-gradient approximations, or full access to internal state variables such as the membrane potential. Such requirements limit training precision and efficiency, and they hinder neuromorphic hardware implementation by increasing memory and I/O bandwidth demands. To overcome these challenges, we propose an analytical event-driven learning framework that computes exact loss gradients not only with respect to synaptic weights and transmission delays but also with respect to adaptive neuronal firing thresholds. Experiments on multiple benchmarks demonstrate significant gains in accuracy (up to 7%), timing precision, and robustness compared to existing methods.
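To give a feel for what "exact loss gradients" with respect to weights, delays, and thresholds means in an event-driven setting, here is a minimal toy sketch for a single spike-response-model neuron. The firing time T is defined implicitly by the threshold condition V(T) = θ, so implicit differentiation yields closed-form derivatives dT/dw_i, dT/dd_i, and dT/dθ without storing any membrane-potential trace. The alpha-shaped kernel, the scan-plus-bisection crossing search, and all names here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

TAU = 1.0  # synaptic time constant (assumed value)

def eps(s):
    """Alpha-shaped postsynaptic kernel, causal, peak 1 at s = TAU (an assumption)."""
    return np.where(s > 0, (s / TAU) * np.exp(1.0 - s / TAU), 0.0)

def deps(s):
    """Time derivative of the kernel."""
    return np.where(s > 0, (1.0 / TAU) * np.exp(1.0 - s / TAU) * (1.0 - s / TAU), 0.0)

def potential(t, w, t_in, d):
    """Membrane potential from input spike times t_in, delayed by d, weighted by w."""
    return np.sum(w * eps(t - t_in - d))

def first_spike_time(w, t_in, d, theta, t_max=10.0, n=10000):
    """Locate the first upward threshold crossing by coarse scan + bisection."""
    ts = np.linspace(0.0, t_max, n)
    vs = np.array([potential(t, w, t_in, d) for t in ts])
    idx = np.argmax(vs >= theta)
    if vs[idx] < theta:
        return None  # neuron never fires
    lo, hi = ts[idx - 1], ts[idx]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if potential(mid, w, t_in, d) < theta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def spike_time_grads(w, t_in, d, theta):
    """Exact firing-time gradients via implicit differentiation of V(T) = theta."""
    T = first_spike_time(w, t_in, d, theta)
    s = T - t_in - d
    dVdT = np.sum(w * deps(s))      # slope at the crossing (positive at an upward crossing)
    dT_dw = -eps(s) / dVdT          # dT/dw_i: stronger input -> earlier spike
    dT_dd = w * deps(s) / dVdT      # dT/dd_i: larger delay -> later spike (for w_i > 0)
    dT_dtheta = 1.0 / dVdT          # dT/dtheta: higher threshold -> later spike
    return T, dT_dw, dT_dd, dT_dtheta
```

Only the input spike times and the crossing slope are needed at the firing event, which is what lets the analytical approach avoid storing membrane-potential trajectories; the gradients can be checked against finite differences of the recomputed firing time.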