Sparse Axonal and Dendritic Delays Enable Competitive SNNs for Keyword Classification

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a feedforward spiking neural network (SNN) architecture with learnable axonal and dendritic delays, leaky integrate-and-fire (LIF) neurons, and a delay-sparsification strategy, addressing the high computational and memory overhead that delay learning imposes on SNNs for complex temporal tasks. The study presents the first systematic comparison between axonal and dendritic delay mechanisms, showing that both remain competitive even under extreme sparsity—with only 20% of delays kept active—while matching or exceeding conventional synaptic delay approaches at a much lower buffering cost. The proposed model attains 95.58% accuracy on the Google Speech Commands dataset and 80.97% on the Spiking Speech Commands dataset, while substantially reducing buffer requirements and overall resource consumption.
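
To make the summary concrete, here is a minimal sketch of an LIF layer with learnable per-neuron axonal delays, assuming a discrete-time simulation in which each neuron's output spikes are written a few steps into the future via a spike buffer. All identifiers (AxonalDelayLIF, max_delay, tau) and the soft-reset dynamics are illustrative assumptions, not the authors' code; the sketch covers the forward pass only (training the rounded delays would need a surrogate such as a straight-through estimator).

```python
# Minimal sketch (not the authors' implementation) of a feedforward LIF
# layer with learnable axonal delays: one delay per output neuron (per
# axon), rather than one per synapse as in synaptic delay learning.
import torch


class AxonalDelayLIF(torch.nn.Module):
    def __init__(self, n_in, n_out, max_delay=25, tau=0.9):
        super().__init__()
        self.fc = torch.nn.Linear(n_in, n_out, bias=False)
        self.tau = tau              # membrane leak factor per time step
        self.max_delay = max_delay  # upper bound on the spike buffer depth
        # Real-valued delay parameter per neuron, rounded at use time.
        self.delay = torch.nn.Parameter(torch.rand(n_out) * max_delay)

    def forward(self, x):
        # x: (time, batch, n_in) binary spike tensor
        T, B, _ = x.shape
        n_out = self.fc.out_features
        v = torch.zeros(B, n_out)                        # membrane potential
        buf = torch.zeros(T + self.max_delay, B, n_out)  # output spike buffer
        d = self.delay.detach().round().clamp(0, self.max_delay).long()
        b_idx = torch.arange(B).unsqueeze(1)             # (B, 1)
        n_idx = torch.arange(n_out).unsqueeze(0)         # (1, n_out)
        for t in range(T):
            v = self.tau * v + self.fc(x[t])
            spikes = (v >= 1.0).float()
            v = v - spikes                               # soft reset
            # Neuron j's spikes land d[j] steps into the future.
            buf[t + d, b_idx, n_idx] = spikes
        return buf[:T]


# Usage: 100 time steps, batch of 4, 64 input neurons -> (100, 4, 32)
layer = AxonalDelayLIF(n_in=64, n_out=32)
out = layer(torch.bernoulli(0.1 * torch.ones(100, 4, 64)))
```

Because the delay is indexed per output neuron rather than per synapse, the buffer grows with the number of neurons instead of the number of connections, which is the memory advantage the summary refers to.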

📝 Abstract
Training transmission delays in spiking neural networks (SNNs) has been shown to substantially improve their performance on complex temporal tasks. In this work, we show that learning either axonal or dendritic delays enables deep feedforward SNNs composed of leaky integrate-and-fire (LIF) neurons to reach accuracy comparable to existing synaptic delay learning approaches, while significantly reducing memory and computational overhead. SNN models with either axonal or dendritic delays achieve up to $95.58\%$ accuracy on the Google Speech Commands (GSC) and $80.97\%$ on the Spiking Speech Commands (SSC) datasets, matching or exceeding prior methods based on synaptic delays or more complex neuron models. By tuning the delay parameters, we also obtain improved performance for the synaptic delay learning baselines, strengthening the comparison. We find that axonal delays offer the most favorable trade-off, combining lower buffering requirements with slightly higher accuracy than dendritic delays. We further show that the performance of axonal and dendritic delay models is largely preserved under strong delay sparsity, with as few as $20\%$ of delays remaining active, further reducing buffering requirements. Overall, our results indicate that learnable axonal and dendritic delays provide a resource-efficient and effective mechanism for temporal representation in SNNs. Code is available at https://github.com/YounesBouhadjar/AxDenSynDelaySNN
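
The sparsity result in the abstract (only $20\%$ of delays remaining active) can be illustrated by pruning the learned delay parameters; below is a minimal sketch assuming a magnitude-based top-k criterion. The function name sparsify_delays and the ranking rule are our assumptions for illustration; the paper may select active delays differently.

```python
# Hedged sketch of delay sparsification: keep only a fraction of delays
# "active" and force the rest to zero, so they need no spike buffering.
import torch


def sparsify_delays(delays: torch.Tensor, keep_frac: float = 0.2) -> torch.Tensor:
    """Return a copy of `delays` with all but the top `keep_frac`
    fraction (by absolute value) zeroed out."""
    k = max(1, int(keep_frac * delays.numel()))
    flat = delays.detach().abs().flatten()
    threshold = flat.topk(k).values.min()       # smallest kept magnitude
    mask = (delays.detach().abs() >= threshold).float()
    return delays * mask                        # pruned delays become 0


# Example: per-neuron axonal delays for a 512-unit layer
delays = torch.rand(512) * 25.0
sparse = sparsify_delays(delays, keep_frac=0.2)
print((sparse != 0).float().mean())             # ~0.20 remain active
```

A zeroed delay means the neuron's spikes propagate immediately, so the corresponding buffer slots can be dropped entirely, which is where the additional reduction in buffering requirements comes from.
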
Problem

Research questions and friction points this paper is trying to address.

spiking neural networks
temporal representation
keyword classification
transmission delays
computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

axonal delays
dendritic delays
spiking neural networks
delay sparsity
temporal representation