Quantization Meets Spikes: Lossless Conversion in the First Timestep via Polarity Multi-Spike Mapping

📅 2025-08-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses key challenges in ANN-to-SNN conversion: the difficulty of achieving high accuracy within a single timestep, substantial quantization-induced information loss (particularly from discarding negative activations), and sensitivity to hyperparameters. To this end, we propose Polarity Multi-Spike Mapping (PMSM), the first method to model quantization-layer information loss analytically via information entropy, enabling lossless conversion at the initial timestep. PMSM incorporates an adaptive hyperparameter strategy that eliminates manual tuning, and integrates quantization-aware training with a multi-timestep dynamic spike utilization mechanism. On ViT-S, it achieves 98.5%, 89.3%, and 81.6% top-1 accuracy on CIFAR-10, CIFAR-100, and ImageNet, respectively, all within a single timestep. With VGG-16, it reduces energy consumption by more than 5×. This is the first ANN-to-SNN conversion framework to achieve single-step inference, high accuracy, low energy consumption, and strong robustness simultaneously, without information loss.
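To make the first-timestep claim concrete, here is a minimal sketch (our illustration, not the paper's implementation) of the idea behind polarity multi-spike mapping: the quantizer keeps negative post-BatchNorm activations, and at t = 1 the neuron emits a burst of spikes whose polarity and count reproduce the quantized activation exactly. The quantizer form, threshold, and level count below are assumptions chosen for illustration.

```python
import numpy as np

def quantize_activation(x, threshold, levels):
    """Signed clip-and-round quantizer (our assumed form; the paper's
    exact quantization layer may differ). Unlike a ReLU-based
    quantizer, negative values are representable."""
    step = threshold / levels
    return np.clip(np.round(x / step), -levels, levels) * step

def polarity_multi_spike(x, threshold, levels):
    """Hypothetical first-timestep readout: a signed spike count whose
    polarity matches the activation's sign, so the quantized value is
    reproduced exactly at t = 1 (+k means k positive spikes, -k means
    k negative spikes)."""
    step = threshold / levels
    return np.clip(np.round(x / step), -levels, levels).astype(int)

x = np.array([-0.7, -0.1, 0.0, 0.4, 1.3])
counts = polarity_multi_spike(x, threshold=1.0, levels=4)
recovered = counts * (1.0 / 4)
# The current recovered from the signed spike counts equals the
# quantized ANN activation, i.e. conversion is exact at t = 1 here.
assert np.allclose(recovered, quantize_activation(x, 1.0, 4))
print(counts)     # [-3  0  0  2  4]
print(recovered)  # [-0.75  0.    0.    0.5   1.  ]
```

Because the signed spike counts invert the quantizer exactly, nothing is discarded at the first timestep under these assumptions, which is the property the summary describes as lossless conversion at the initial timestep.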

📝 Abstract
Spiking neural networks (SNNs) offer computational-efficiency advantages over traditional artificial neural networks (ANNs) through event-driven computing. While direct training methods tackle the non-differentiable activation mechanisms of SNNs, they often incur high computational and energy costs during training, so the ANN-to-SNN conversion approach remains a valuable and practical alternative. Conversion-based methods leverage the discrete output of a quantization layer to obtain SNNs with low latency. Although the theoretical minimum latency is one timestep, existing conversion methods have struggled to reach such ultra-low latency without accuracy loss. Moreover, current quantization approaches often discard negative-valued information after batch normalization and are highly sensitive to hyperparameter configuration, degrading performance. In this work, we analyze, for the first time, the information loss introduced by quantization layers through the lens of information entropy. Building on this analysis, we introduce Polarity Multi-Spike Mapping (PMSM) and a hyperparameter adjustment strategy tailored to the quantization layer. Our method achieves nearly lossless ANN-to-SNN conversion in the extreme case of the first timestep, while also exploiting the temporal dynamics of SNNs across multiple timesteps to maintain stable performance on complex tasks. Experimental results show that PMSM achieves state-of-the-art accuracies of 98.5% on CIFAR-10, 89.3% on CIFAR-100, and 81.6% on ImageNet with only one timestep on the ViT-S architecture, establishing a new benchmark for efficient conversion. In addition, our method reduces energy consumption by over 5x with VGG-16 on CIFAR-10 and CIFAR-100, compared to the baseline method.
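The abstract's entropy argument can be illustrated with a short numerical experiment (our construction; the paper's analysis is analytical, and the Gaussian activation model below is our assumption): a quantizer that clips negatives to zero collapses roughly half of the probability mass onto a single level, so its output distribution carries less Shannon entropy than a signed quantizer with the same step size.

```python
import numpy as np

def discrete_entropy(samples):
    """Shannon entropy (bits) of the empirical distribution over the
    discrete quantization levels present in `samples`."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
# Assumption (ours, for illustration): post-BatchNorm activations are
# roughly zero-mean Gaussian, so about half of them are negative.
x = rng.normal(0.0, 1.0, size=100_000)

threshold, levels = 2.0, 4
step = threshold / levels

# Unsigned quantizer: negatives are discarded (clipped to 0) first.
q_unsigned = np.clip(np.round(np.maximum(x, 0) / step), 0, levels)
# Signed quantizer: negative levels are representable.
q_signed = np.clip(np.round(x / step), -levels, levels)

print(f"entropy, negatives discarded: {discrete_entropy(q_unsigned):.3f} bits")
print(f"entropy, negatives kept:      {discrete_entropy(q_signed):.3f} bits")
# The unsigned quantizer piles ~50% of the mass onto the zero level,
# so its output carries measurably less information per activation.
```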
Problem

Research questions and friction points this paper is trying to address.

Achieving lossless ANN-to-SNN conversion with single timestep latency
Addressing information loss in quantization layers during neural conversion
Overcoming hyperparameter sensitivity in ANN-to-SNN conversion methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Polarity Multi-Spike Mapping for conversion
Hyperparameter adjustment strategy for quantization (see the sketch after this list)
First timestep lossless ANN-to-SNN conversion
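The summary does not spell out the adjustment rule, so the sketch below shows one plausible data-driven calibration that removes manual tuning (a percentile rule over calibration activations); the function name and the percentile value are our assumptions, not the paper's method.

```python
import numpy as np

def calibrate_threshold(activations, percentile=99.9):
    """Hypothetical data-driven threshold rule (not the paper's exact
    strategy): set the clipping threshold from a high percentile of the
    absolute activations observed on calibration data, rather than
    hand-tuning it per layer."""
    return float(np.percentile(np.abs(activations), percentile))

rng = np.random.default_rng(1)
acts = rng.normal(0.0, 1.5, size=50_000)  # stand-in for one layer's outputs
lam = calibrate_threshold(acts)
print(f"auto-selected threshold: {lam:.3f}")
```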
👥 Authors
Hangming Zhang (College of Intelligence and Computing, Tianjin University, Tianjin, China)
Zheng Li (School of Future Technology, Tianjin University, Tianjin, China)
Qiang Yu (Tianjin University)
Computational Neuroscience · Spiking Neural Networks · Artificial Intelligence · Learning and Memory