Temporal Reversal Regularization for Spiking Neural Networks: Hybrid Spatio-Temporal Invariance for Generalization

📅 2024-08-17
📈 Citations: 1
Influential: 0
🤖 AI Summary
Spiking neural networks (SNNs) suffer from severe overfitting, limiting their generalization capability. To address this, we propose Temporal Reversal Regularization (TRR), the first method to explicitly model temporal reversibility as a regularization prior for SNNs. TRR enforces consistency between spike-rate distributions of original and temporally reversed inputs/feature sequences, while incorporating lightweight Hadamard-based feature mixing to construct spatiotemporal-invariant representations. We theoretically derive a tightened upper bound on the generalization error. Empirically, TRR significantly improves generalization and adversarial robustness across static image classification, neuromorphic event-stream recognition, and 3D point cloud classification. Notably, it achieves substantial accuracy gains in low-latency neuromorphic object recognition. By enhancing both performance and efficiency, TRR advances brain-inspired co-design of algorithms and neuromorphic hardware.

📝 Abstract
Spiking neural networks (SNNs) have received widespread attention as an ultra-low power computing paradigm. Recent studies have shown that SNNs suffer from severe overfitting, which limits their generalization performance. In this paper, we propose a simple yet effective Temporal Reversal Regularization (TRR) to mitigate overfitting during training and facilitate generalization of SNNs. We exploit the inherent temporal properties of SNNs to perform input/feature temporal reversal perturbations, prompting the SNN to produce original-reversed consistent outputs and learn perturbation-invariant representations. To further enhance generalization, we utilize the lightweight "star operation" (Hadamard product) for feature hybridization of original and temporally reversed spike firing rates, which expands the implicit dimensionality and acts as a spatio-temporal regularizer. We show theoretically that our method is able to tighten the upper bound of the generalization error, and extensive experiments on static/neuromorphic recognition as well as 3D point cloud classification tasks demonstrate its effectiveness, versatility, and adversarial robustness. In particular, our regularization significantly improves the recognition accuracy of low-latency SNNs for neuromorphic objects, contributing to the real-world deployment of neuromorphic computing software-hardware integration.
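The temporal reversal idea above can be sketched with a toy leaky integrate-and-fire (LIF) layer: because a stateful spiking neuron's response depends on input order, running the same sequence forward and time-reversed generally yields different firing rates, and a TRR-style penalty pushes those rates together. This is a minimal illustration under assumed names and parameters (`lif_forward`, `tau`, `v_th`), not the paper's implementation.

```python
import numpy as np

def lif_forward(x, tau=0.5, v_th=1.0):
    """Run a toy leaky integrate-and-fire layer over a (T, N) sequence."""
    v = np.zeros(x.shape[1])          # membrane potentials
    spikes = np.zeros_like(x)
    for t in range(x.shape[0]):
        v = tau * v + x[t]            # leaky integration of input current
        spikes[t] = (v >= v_th)       # fire when the threshold is crossed
        v = v * (1.0 - spikes[t])     # hard reset after a spike
    return spikes

rng = np.random.default_rng(0)
T, N = 8, 16
x = rng.random((T, N))                # toy input current sequence

# Two forward passes: original order and temporally reversed order.
rate_orig = lif_forward(x).mean(axis=0)
rate_rev = lif_forward(x[::-1]).mean(axis=0)

# TRR-style consistency penalty, sketched here as a simple MSE between the
# two firing-rate profiles (the paper works with spike-rate distributions).
consistency_loss = float(np.mean((rate_orig - rate_rev) ** 2))
```

Because the LIF state makes the output order-sensitive, `consistency_loss` is generally nonzero; minimizing it during training encourages reversal-invariant representations.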
Problem

Research questions and friction points this paper is trying to address.

Mitigate overfitting in Spiking Neural Networks (SNNs)
Enhance generalization via Temporal Reversal Regularization (TRR)
Improve recognition accuracy for neuromorphic object classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal Reversal Regularization mitigates overfitting
Star operation hybridizes original and reversed features
Method tightens generalization error upper bound
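The "star operation" contribution amounts to an element-wise (Hadamard) product of the original and temporally reversed spike firing rates. A minimal sketch, with stand-in rate vectors rather than real SNN outputs:

```python
import numpy as np

# Stand-in firing-rate vectors (assumptions for illustration only; in the
# paper these come from the original and time-reversed forward passes).
rng = np.random.default_rng(1)
rate_orig = rng.random(16)
rate_rev = rng.random(16)

# Hadamard "star" mixing: products of (linear) features implicitly contain
# pairwise cross terms, which is the "expanded implicit dimensionality"
# argument for star operations as a spatio-temporal regularizer.
hybrid = rate_orig * rate_rev
```

The appeal of the Hadamard product here is cost: it adds no parameters and only an element-wise multiply per feature, which suits low-power neuromorphic deployment.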
Lin Zuo
School of Information and Software Engineering, University of Electronic Science and Technology of China
Yongqi Ding
School of Information and Software Engineering, University of Electronic Science and Technology of China
Wenwei Luo
School of Information and Software Engineering, University of Electronic Science and Technology of China
Mengmeng Jing
University of Electronic Science and Technology of China
Machine Learning, Computer Vision, Multimedia
Kunshan Yang
School of Information and Software Engineering, University of Electronic Science and Technology of China