🤖 AI Summary
To address the low training efficiency of spiking neural networks (SNNs) and the difficulty of effectively transferring knowledge from artificial neural networks (ANNs) into the rate-coding domain, this paper proposes an ANN-guided, end-to-end differentiable distillation framework. The method replaces SNN components with their corresponding ANN modules block by block, embedding them directly into the SNN forward pass; this preserves the intrinsic spiking dynamics while progressively aligning rate-coded feature representations. Crucially, it is the first framework to integrate rate-based backpropagation into such hybrid architectures, ensuring both gradient validity and structural consistency. Evaluated on multiple benchmark datasets, the approach accelerates training convergence and improves generalization, matching or surpassing state-of-the-art ANN-SNN distillation methods.
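The block-wise replacement idea can be pictured with a small sketch. Below is a minimal, hypothetical PyTorch illustration, not the paper's actual code: the class name `HybridModel`, the `split` index, and the time-step count `T` are all assumptions. The first `split` blocks run as spiking blocks over `T` time steps, their time-averaged output serves as the rate-coded feature, and the remaining blocks are the pretrained ANN counterparts operating on that rate.

```python
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    """Illustrative hybrid: an SNN prefix followed by the teacher's ANN
    suffix, with a rate-coded feature bridging the two at the split point.
    Membrane-state resets and surrogate gradients are omitted for brevity.
    """

    def __init__(self, snn_blocks, ann_blocks, split, T=4):
        super().__init__()
        assert len(snn_blocks) == len(ann_blocks)
        self.snn_prefix = nn.ModuleList(snn_blocks[:split])   # being trained
        self.ann_suffix = nn.ModuleList(ann_blocks[split:])   # frozen teacher modules
        self.T = T

    def forward(self, x):
        # Run the spiking prefix for T time steps and accumulate spikes.
        rate = 0.0
        for _ in range(self.T):
            s = x
            for blk in self.snn_prefix:
                s = blk(s)          # spiking block emits (near-)binary outputs
            rate = rate + s
        rate = rate / self.T        # time-averaged, rate-coded feature

        # Feed the rate-based feature to the ANN suffix.
        h = rate
        for blk in self.ann_suffix:
            h = blk(h)
        return h
```

One plausible reading of the progressive schedule is to increase `split` stage by stage until the whole network is spiking: at each stage the ANN suffix keeps the objective well-defined while the SNN prefix learns rate-aligned features. This is a sketch of the concept under those assumptions, not the paper's exact procedure.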
📄 Abstract
Spiking Neural Networks (SNNs) have garnered considerable attention as a potential alternative to Artificial Neural Networks (ANNs), and recent studies have highlighted their potential on large-scale datasets. Two main approaches exist for SNN training: direct training and ANN-to-SNN (ANN2SNN) conversion. To fully leverage existing ANN models in guiding SNN learning, one can employ either direct ANN-to-SNN conversion or ANN-SNN distillation. In this paper, we propose an ANN-SNN distillation framework from the ANN-to-SNN perspective, built around a block-wise replacement strategy for ANN-guided learning. By generating intermediate hybrid models that progressively align the SNN's feature space to that of the ANN through rate-based features, our framework naturally incorporates rate-based backpropagation as its training method. Our approach achieves results comparable to or better than state-of-the-art SNN distillation methods, demonstrating both training and learning efficiency.
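As a companion sketch, the rate-based feature alignment described above might look like the following. This is again hypothetical: the function name `rate_alignment_loss` and the `[T, B, ...]` spike layout are assumptions, not the paper's defined loss. The SNN spike train is averaged over time and matched to the ANN teacher's feature, so gradients flow through the rate rather than through individual spikes.

```python
import torch
import torch.nn.functional as F

def rate_alignment_loss(snn_spikes: torch.Tensor, ann_feat: torch.Tensor) -> torch.Tensor:
    """Match the SNN's rate-coded feature to the ANN teacher's feature.

    snn_spikes: spike outputs stacked over time, shape [T, B, ...].
    ann_feat:   teacher feature of shape [B, ...] (detached so the teacher
                receives no gradient updates).
    """
    rate = snn_spikes.mean(dim=0)             # firing rate over T time steps
    return F.mse_loss(rate, ann_feat.detach())
```

In a full objective, a term like this would typically be summed over the replaced blocks and combined with the ordinary task loss.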