Replay4NCL: An Efficient Memory Replay-based Methodology for Neuromorphic Continual Learning in Embedded AI Systems

📅 2025-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high latency and energy consumption caused by memory replay in embedded neuromorphic continual learning (NCL), this paper proposes an efficient spiking neural network (SNN) continual learning framework tailored for resource-constrained scenarios. Methodologically, it introduces: (1) latent-space data compression coupled with ultra-short time-step replay to drastically reduce replay overhead; and (2) the first joint integration of tunable neuronal thresholds and dynamic learning-rate scaling in NCL, enabling co-optimization of computational efficiency and energy consumption. Evaluated on the SHD incremental learning benchmark, the framework achieves 90.43% Top-1 accuracy on previously learned tasks (+4.21% over baseline) while maintaining stable acquisition of new tasks. Compared to state-of-the-art baselines, it reduces inference latency by 4.88×, decreases latent memory footprint by 20%, and cuts energy consumption by 36.43%. This work breaks the energy-efficiency bottleneck of conventional replay-based paradigms on embedded neuromorphic hardware, establishing a deployable pathway toward low-power brain-inspired continual learning.
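The latent-space compression idea in the summary above can be illustrated with a minimal sketch: instead of storing raw spike trains for replay, keep compact per-neuron spike counts and regenerate a short spike train at replay time. All names, array shapes, and the rate-coding scheme below are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Raw latent data: a binary spike train of 32 timesteps x 16 neurons
# (the 32/16 sizes are arbitrary illustrative choices).
raw_spikes = (rng.random((32, 16)) < 0.3).astype(np.float32)

# Compress: store only per-neuron spike counts as uint8 (a lossy rate code).
latent_counts = raw_spikes.sum(axis=0).astype(np.uint8)

def decompress(counts, t_short, t_orig, seed=1):
    """Regenerate a short spike train whose firing rates match the
    stored counts, so replay can run with far fewer timesteps."""
    rates = counts / t_orig                       # per-neuron firing probability
    g = np.random.default_rng(seed)
    return (g.random((t_short, counts.size)) < rates).astype(np.float32)

short_replay = decompress(latent_counts, t_short=4, t_orig=32)

# Memory footprint of the stored replay data shrinks substantially.
print(raw_spikes.nbytes, latent_counts.nbytes)
```

The count-based code here stands in for whatever latent representation the paper actually stores; the point is only that compressed latents plus short regenerated spike trains cut both replay memory and replay timesteps.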

📝 Abstract
The Neuromorphic Continual Learning (NCL) paradigm leverages Spiking Neural Networks (SNNs) to provide continual learning (CL) capabilities for AI systems, allowing them to adapt to dynamically changing environments. Currently, the state-of-the-art employs a memory replay-based method to maintain old knowledge. However, this technique relies on long timesteps and compression-decompression steps, thereby incurring significant latency and energy overheads, which are not suitable for tightly-constrained embedded AI systems (e.g., mobile agents/robotics). To address this, we propose Replay4NCL, a novel efficient memory replay-based methodology for enabling NCL in embedded AI systems. Specifically, Replay4NCL compresses the latent data (old knowledge), then replays it during the NCL training phase with small timesteps to minimize processing latency and energy consumption. To compensate for the information loss from reduced spikes, we adjust the neuron threshold potential and learning rate settings. Experimental results on a class-incremental scenario with the Spiking Heidelberg Digits (SHD) dataset show that Replay4NCL preserves old knowledge with a Top-1 accuracy of 90.43%, compared to 86.22% for the state-of-the-art, while effectively learning new tasks, achieving a 4.88x latency speed-up, 20% latent memory saving, and 36.43% energy saving. These results highlight the potential of our Replay4NCL methodology to further advance NCL capabilities for embedded AI systems.
Problem

Research questions and friction points this paper is trying to address.

High latency and energy consumption of memory replay in neuromorphic continual learning
Retaining old knowledge on tightly resource-constrained embedded AI systems
Replay overhead from long timesteps and compression-decompression steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compresses latent data for efficient replay
Adjusts neuron threshold and learning rate
Reduces latency and energy consumption significantly
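The threshold and learning-rate adjustments listed above can be sketched with a toy leaky integrate-and-fire (LIF) layer. This is not the paper's model: the decay constant, the 0.5 threshold factor, and the learning-rate scaling rule are all assumptions chosen purely to illustrate why compensation is needed when replay timesteps shrink.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_layer(spikes, w, v_thresh, decay=0.9):
    """Toy leaky integrate-and-fire layer; returns per-neuron spike counts."""
    v = np.zeros(w.shape[1])
    counts = np.zeros(w.shape[1])
    for x_t in spikes:
        v = decay * v + x_t @ w          # leaky integration of weighted input
        fired = v >= v_thresh
        counts += fired
        v = np.where(fired, 0.0, v)      # reset neurons that spiked
    return counts

w = rng.uniform(0.0, 0.2, size=(16, 8))
long_t, short_t = 32, 4
x = (rng.random((long_t, 16)) < 0.3).astype(np.float32)

base_thresh, base_lr = 1.0, 1e-3
counts_long = lif_layer(x, w, base_thresh)             # baseline long replay
counts_short = lif_layer(x[:short_t], w, base_thresh)  # ultra-short replay loses spikes

# Compensation (illustrative factors): lower the threshold so neurons can
# still fire within a few timesteps, and scale up the learning rate used
# for replayed samples so the fewer updates still carry enough weight.
counts_comp = lif_layer(x[:short_t], w, 0.5 * base_thresh)
replay_lr = base_lr * (long_t / short_t)
```

The sketch shows the trade-off the bullets describe: cutting timesteps from 32 to 4 starves the layer of spikes, and lowering the firing threshold (plus boosting the replay learning rate) is one plausible way to recover the lost signal.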
M. Minhas
Electrical and Communication Engineering Department, United Arab Emirates University (UAEU), Al Ain, UAE
Rachmad Vidya Wicaksana Putra
eBrain Lab, New York University (NYU) Abu Dhabi, Abu Dhabi, UAE
Falah R. Awwad
Electrical and Communication Engineering Department, United Arab Emirates University (UAEU), Al Ain, UAE
Osman Hasan
Professor of Electrical Engineering, National University of Sciences and Technology
Formal Methods, Theorem Proving, Model Checking, Approximate Computing, Hardware Security
M. Shafique
eBrain Lab, New York University (NYU) Abu Dhabi, Abu Dhabi, UAE