🤖 AI Summary
Spiking neural networks (SNNs) encode information in low-bit spikes but can suffer performance degradation as a result, while deep state-space models (SSMs) struggle to model non-stationary temporal dynamics due to their strictly stable linear dynamics and lack of reset mechanisms. Method: We propose a multiple-output spiking neuron model that combines a general linear state-space transition with a nonlinear feedback-based reset, explicitly separating the reset condition, the reset action, and the spike function, and thereby relaxing the reliance of deep SSMs on stable linear dynamics. Contribution/Results: The model unifies the energy efficiency of SNNs with the long-range sequence modeling capability of SSMs, supports low-bit spike encoding and multiple-output representation, and achieves performance comparable to existing SNN benchmarks on keyword spotting, event-based vision, and sequential pattern recognition tasks, demonstrating that learning remains possible even when the linear neuron dynamics are unstable.
📝 Abstract
Neuromorphic computing is an emerging technology enabling low-latency and energy-efficient signal processing. A key algorithmic tool in neuromorphic computing is the spiking neural network (SNN). SNNs are biologically inspired neural networks that utilize stateful neurons and provide low-bit data processing by encoding and decoding information with spikes. Similar to SNNs, deep state-space models (SSMs) utilize stateful building blocks. However, deep SSMs, which recently achieved competitive performance in various temporal modeling tasks, are typically designed with high-precision activation functions and no reset mechanisms. To bridge the gains offered by SNNs and recent deep SSM models, we propose a novel multiple-output spiking neuron model that combines a linear, general SSM state transition with a nonlinear feedback mechanism through reset. Compared to existing neuron models for SNNs, our proposed model clearly conceptualizes the differences between the spiking function, the reset condition, and the reset action. The experimental results on various tasks, namely a keyword spotting task, an event-based vision task, and a sequential pattern recognition task, show that our proposed model achieves performance comparable to existing benchmarks in the SNN literature. Our results illustrate how the proposed reset mechanism can overcome instability and enable learning even when the linear part of the neuron dynamics is unstable, allowing us to go beyond the strictly enforced stability of linear dynamics in recent deep SSM models.
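To make the separation of concerns concrete, the following minimal sketch (our own illustration, not the paper's exact formulation; all names and parameter values are assumptions) simulates a single scalar spiking neuron in which the linear state transition, the reset condition, the reset action, and the spike function are written as distinct steps. The state coefficient `a > 1` makes the linear dynamics unstable on their own, so the nonlinear reset feedback is what keeps the state bounded:

```python
def simulate_neuron(inputs, a=1.05, b=1.0, threshold=1.0):
    """Simulate one scalar spiking neuron over a sequence of inputs.

    Hypothetical illustration: a > 1 means the linear part of the
    dynamics is unstable; the reset supplies the nonlinear feedback
    that keeps the membrane state bounded.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = a * v + b * x               # linear state transition (SSM-like)
        fired = v >= threshold          # reset condition
        if fired:
            v -= threshold              # reset action (soft reset by subtraction)
        spikes.append(1 if fired else 0)  # spike function: binary output
    return spikes, v

spikes, final_v = simulate_neuron([0.4, 0.4, 0.4, 0.0, 0.4])
```

Keeping these four steps separate is what lets the linear transition be chosen freely (even unstable), since boundedness comes from the reset rather than from eigenvalue constraints.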