From Lightweight CNNs to SpikeNets: Benchmarking Accuracy-Energy Tradeoffs with Pruned Spiking SqueezeNet

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of systematic evaluation of the energy-efficiency advantages of lightweight spiking neural networks (SNNs) for edge intelligence by constructing and benchmarking multiple LIF-neuron-based lightweight SNN architectures, including ShuffleNet and SqueezeNet, trained with surrogate gradients within a CNN-to-SNN conversion framework. The work introduces a novel module-level structured pruning strategy that substantially improves both accuracy and energy efficiency. On CIFAR-10, the pruned SNN-SqueezeNet-P reaches accuracy only 1% below its CNN counterpart while cutting energy consumption by 88.1% and parameter count by 19%, and it even surpasses the original unpruned SNN-SqueezeNet by 6% in accuracy. Overall, the proposed approach improves energy efficiency by up to 15.7×, effectively narrowing the performance gap between SNNs and conventional CNNs.
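To make the summary's "LIF-neuron-based" and "surrogate gradient" terms concrete, here is a minimal, self-contained sketch of a Leaky Integrate-and-Fire neuron and one common sigmoid-shaped surrogate derivative. All parameter values (`tau`, `v_th`, `alpha`) are illustrative assumptions, not the paper's settings, and the surrogate shown is one popular choice rather than the paper's specific function.

```python
import math

def lif_neuron(inputs, tau=2.0, v_th=1.0, v_reset=0.0):
    """Simulate one LIF neuron over a sequence of input currents.

    The membrane potential leaks toward the input with time constant
    tau, and the neuron emits a binary spike (then hard-resets)
    whenever the potential crosses the threshold v_th.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = v + (x - v) / tau   # leaky integration toward the input
        if v >= v_th:
            spikes.append(1)    # fire
            v = v_reset         # hard reset after the spike
        else:
            spikes.append(0)
    return spikes

def surrogate_grad(v, v_th=1.0, alpha=4.0):
    """Sigmoid-based surrogate for the spike's derivative.

    The Heaviside spike function has zero gradient almost everywhere,
    so training replaces it with a smooth stand-in like this one
    during the backward pass (one common choice, used here only to
    illustrate the idea).
    """
    s = 1.0 / (1.0 + math.exp(-alpha * (v - v_th)))
    return alpha * s * (1.0 - s)

# A constant supra-threshold input produces a regular spike train:
print(lif_neuron([1.5] * 5))  # → [0, 1, 0, 1, 0]
```

The surrogate peaks at the threshold (`surrogate_grad(1.0)` is 1.0 with these defaults), so gradient flows mainly through neurons near firing — the property that makes surrogate-gradient training of deep SNNs workable.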

📝 Abstract
Spiking Neural Networks (SNNs) are increasingly studied as energy-efficient alternatives to Convolutional Neural Networks (CNNs), particularly for edge intelligence. However, prior work has largely emphasized large-scale models, leaving the design and evaluation of lightweight CNN-to-SNN pipelines underexplored. In this paper, we present the first systematic benchmark of lightweight SNNs obtained by converting compact CNN architectures into spiking networks, where activations are modeled with Leaky Integrate-and-Fire (LIF) neurons and trained using surrogate gradient descent under a unified setup. We construct spiking variants of ShuffleNet, SqueezeNet, MnasNet, and MixNet, and evaluate them on CIFAR-10, CIFAR-100, and TinyImageNet, measuring accuracy, F1-score, parameter count, computational complexity, and energy consumption. Our results show that SNNs can achieve up to 15.7x higher energy efficiency than their CNN counterparts while retaining competitive accuracy. Among these, the SNN variant of SqueezeNet consistently outperforms the other lightweight SNNs. To further optimize this model, we apply a structured pruning strategy that removes entire redundant modules, yielding a pruned architecture, SNN-SqueezeNet-P. This pruned model improves CIFAR-10 accuracy by 6% and reduces parameters by 19% compared to the original SNN-SqueezeNet. Crucially, it narrows the gap with CNN-SqueezeNet, achieving nearly the same accuracy (only 1% lower) with an 88.1% reduction in energy consumption due to sparse spike-driven computations. Together, these findings establish lightweight SNNs as practical, low-power alternatives for high-performance intelligence at the edge.
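The abstract's energy claims rest on the standard accounting used in SNN benchmarking: dense CNN layers cost one multiply-accumulate (MAC) per weight, while spike-driven layers cost one accumulate (AC) per *active* synapse, so sparsity translates directly into energy savings. A back-of-the-envelope sketch, assuming the commonly cited 45 nm per-operation energies and made-up placeholder operation counts (these numbers are not the paper's measurements and do not reproduce its 15.7x figure):

```python
# Commonly assumed 45 nm CMOS per-operation energies (a convention in
# SNN papers, not values reported by this work).
E_MAC = 4.6e-12  # joules per multiply-accumulate (dense CNN op)
E_AC  = 0.9e-12  # joules per accumulate (spike-driven op)

def cnn_energy(n_macs):
    """Energy for one dense-CNN inference: every MAC fires."""
    return n_macs * E_MAC

def snn_energy(n_macs_equiv, timesteps, spike_rate):
    """Energy for the spiking counterpart: an operation only costs
    energy when the presynaptic neuron actually spikes, and spikes
    replace multiplications with cheaper additions."""
    return n_macs_equiv * timesteps * spike_rate * E_AC

macs = 1e8  # placeholder op count for one forward pass
e_cnn = cnn_energy(macs)
e_snn = snn_energy(macs, timesteps=4, spike_rate=0.05)
print(f"energy ratio (CNN/SNN): {e_cnn / e_snn:.1f}x")  # → 25.6x
```

The ratio depends only on the spike rate, the number of timesteps, and the MAC/AC cost gap, which is why low firing rates (high activation sparsity) are the lever behind order-of-magnitude efficiency gains like those the paper reports.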
Problem

Research questions and friction points this paper is trying to address.

Spiking Neural Networks
Lightweight CNNs
Energy Efficiency
Edge Intelligence
Accuracy-Energy Tradeoff
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spiking Neural Networks
Lightweight CNN-to-SNN Conversion
Structured Pruning
Energy Efficiency
Surrogate Gradient Descent
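The "Structured Pruning" contribution operates at module level: whole redundant blocks (e.g. SqueezeNet "fire" modules) are removed rather than individual weights, so the pruned network needs no sparse kernels to realize its savings. A hypothetical sketch of the selection step, assuming per-module saliency scores have already been computed by some criterion (the paper's exact scoring rule is not specified here):

```python
def prune_modules(modules, scores, keep_ratio=0.8):
    """Module-level structured pruning: keep the top `keep_ratio`
    fraction of modules ranked by saliency score, preserving their
    original order in the network.

    `modules` and `scores` are parallel lists; both the names and the
    scoring are illustrative stand-ins, not the paper's method.
    """
    n_keep = max(1, int(len(modules) * keep_ratio))
    ranked = sorted(range(len(modules)),
                    key=lambda i: scores[i], reverse=True)
    keep = set(ranked[:n_keep])  # indices of the highest-scoring modules
    return [m for i, m in enumerate(modules) if i in keep]

# Dropping the two lowest-saliency of five hypothetical fire modules:
modules = ["fire1", "fire2", "fire3", "fire4", "fire5"]
scores  = [0.9, 0.1, 0.7, 0.05, 0.8]
print(prune_modules(modules, scores, keep_ratio=0.6))
# → ['fire1', 'fire3', 'fire5']
```

Because entire modules disappear, parameter count and latency drop proportionally on commodity hardware, which is consistent with the reported 19% parameter reduction from removing redundant blocks.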