🤖 AI Summary
To address the high parameter count and computational cost of Spiking Transformers (STs), which hinder their deployment in resource-constrained settings, this work proposes an efficient sparsification framework integrating synaptic pruning with a collaborative learning compensation mechanism. We introduce two customized pruning strategies: L1P (unstructured pruning based on the L1 norm) and DSP (structured low-rank pruning). Furthermore, we adopt the synergistic Leaky Integrate-and-Fire (sLIF) neuron model to jointly optimize synaptic and intrinsic plasticity, enabling dynamic compensation for the accuracy degradation induced by pruning. Extensive experiments across multiple benchmark datasets demonstrate that our approach significantly reduces model parameters and inference energy consumption while preserving competitive accuracy. These results validate the effectiveness of the pruning-compensation co-design paradigm for building efficient Spiking Transformers.
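The summary names two pruning strategies but does not spell out their mechanics. As a point of reference, unstructured L1-norm pruning is conventionally implemented as magnitude thresholding: the fraction of weights with the smallest absolute values is zeroed out. The sketch below is a generic illustration of that idea, not the paper's exact L1P procedure (the function name and interface are our own):

```python
import numpy as np

def l1_magnitude_prune(weight: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of entries with the smallest |w| (L1 magnitude)."""
    k = int(sparsity * weight.size)  # number of entries to prune
    if k == 0:
        return weight.copy()
    # The k-th smallest absolute value serves as the pruning threshold.
    threshold = np.partition(np.abs(weight).ravel(), k - 1)[k - 1]
    # Keep weights strictly above the threshold; zero the rest.
    return np.where(np.abs(weight) > threshold, weight, 0.0)
```

The structured DSP counterpart would instead remove entire rows, columns, or rank components of a weight matrix, yielding a low-rank factorization that hardware can exploit directly, whereas the element-wise sparsity above requires sparse-kernel support to realize speedups.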
📝 Abstract
As a foundational architecture of artificial intelligence models, the Transformer has recently been adapted to spiking neural networks with promising performance across various tasks. However, existing spiking Transformer (ST)-based models require a substantial number of parameters and incur high computational costs, thus limiting their deployment in resource-constrained environments. To address these challenges, we propose combining synapse pruning with a synergistic learning-based compensation strategy to derive lightweight ST-based models. Specifically, two types of tailored pruning strategies are introduced to reduce redundancy in the weight matrices of ST blocks: an unstructured $\mathrm{L_{1}P}$ method to induce sparse representations, and a structured DSP method to induce low-rank representations. In addition, we propose an enhanced spiking neuron model, termed the synergistic leaky integrate-and-fire (sLIF) neuron, to effectively compensate for model pruning through synergistic learning between synaptic and intrinsic plasticity mechanisms. Extensive experiments on benchmark datasets demonstrate that the proposed methods significantly reduce model size and computational overhead while maintaining competitive performance. These results validate the effectiveness of the proposed pruning and compensation strategies in constructing efficient and high-performing ST-based models.
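The abstract does not give the sLIF dynamics, but it builds on the standard leaky integrate-and-fire neuron. A minimal discrete-time LIF step is sketched below; the time constant `tau` and threshold `v_th` are exposed as arguments to indicate the kind of intrinsic parameters that a synergistic scheme could learn alongside the synaptic weights (the specific update rule here is the textbook one, not the paper's):

```python
import numpy as np

def lif_step(v: np.ndarray, x: np.ndarray,
             tau: float = 2.0, v_th: float = 1.0, v_reset: float = 0.0):
    """One discrete LIF update: leaky integration, spike generation, hard reset.

    `tau` and `v_th` are intrinsic parameters; in an sLIF-style scheme they
    would be trained jointly with the synaptic input `x` (an assumption here).
    """
    v = v + (x - v) / tau                      # leaky integration of input current
    spike = (v >= v_th).astype(v.dtype)        # emit a binary spike at threshold
    v = np.where(spike > 0, v_reset, v)        # hard reset of spiking neurons
    return spike, v
```

Intuitively, intrinsic plasticity lets surviving neurons adjust their excitability (e.g. lowering `v_th`) to recover activity lost when synapses are pruned away.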