🤖 AI Summary
Continual learning (CL) in deep neural networks incurs excessive computational overhead, a high memory footprint, and severe forgetting of prior tasks, which challenges deployment on resource-constrained embedded systems.
Method: This survey systematically examines how the intrinsic sparsity and event-driven computation of spiking neural networks (SNNs) can be combined with continual learning theory, covering replay-based, regularization-based, and dynamic-architecture approaches, together with brain-inspired hardware mapping and energy considerations.
Contribution/Results: We provide comparative analyses of state-of-the-art SNN-based CL methods with respect to network complexity, memory, latency, and power/energy efficiency, identify deployment bottlenecks under realistic resource constraints, and outline practical applications and open challenges, offering a path toward low-power, autonomous edge intelligence.
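Forgetting across a task sequence is commonly quantified by backward transfer (BWT): the average change in accuracy on earlier tasks after training on all tasks, where negative values indicate forgetting. A minimal sketch, assuming a hypothetical accuracy matrix `R` in which `R[i][j]` is the accuracy on task `j` measured after training on task `i` (the values below are illustrative, not results from this paper):

```python
def backward_transfer(R):
    """Backward transfer (BWT): mean change in accuracy on each earlier
    task between (a) right after it was learned and (b) after the final
    task. R[i][j] = accuracy on task j after training on task i."""
    T = len(R)
    return sum(R[T - 1][j] - R[j][j] for j in range(T - 1)) / (T - 1)

# Hypothetical 3-task accuracy matrix (rows: state after training task i).
R = [
    [0.90, 0.10, 0.10],
    [0.85, 0.92, 0.12],
    [0.80, 0.88, 0.91],
]
print(backward_transfer(R))  # ≈ -0.07 (negative => forgetting)
```

A CL method "constrains forgetting" when it keeps this value close to zero while still learning each new task well.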
📝 Abstract
To adapt to real-world dynamics, intelligent systems need to assimilate new knowledge without catastrophic forgetting, where learning new tasks degrades performance on old tasks. To address this, the concept of continual learning has been proposed to enable autonomous systems to acquire new knowledge and dynamically adapt to changing environments. In particular, energy-efficient continual learning is needed to preserve the functionality of autonomous systems under tight compute and memory budgets (i.e., in so-called autonomous embedded systems). Neuromorphic computing, with brain-inspired Spiking Neural Networks (SNNs), offers inherent advantages for enabling low-power/energy continual learning in such systems. In this paper, we comprehensively discuss the foundations and methods for enabling continual learning in neural networks, and then analyze the state-of-the-art works on SNNs. Afterward, we conduct comparative analyses of existing methods while considering crucial design factors, such as network complexity, memory, latency, and power/energy efficiency. We also explore practical applications that can benefit from SNN-based continual learning, as well as open challenges in real-world scenarios. In this manner, our survey provides valuable insights into recent advancements in SNN-based continual learning for real-world use cases.
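Among the continual learning families the survey covers, replay (rehearsal) is the most widely used: a small memory of past samples is interleaved with new-task data so old tasks keep contributing to the loss. A minimal, framework-free sketch of the memory component, using reservoir sampling so the buffer stays an unbiased sample of the whole stream under a fixed memory budget (class and method names here are illustrative, not an API from the surveyed works):

```python
import random

class ReplayBuffer:
    """Fixed-capacity memory of past samples, filled via reservoir
    sampling so every sample seen so far is retained with equal
    probability, regardless of stream length."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # Keep the new sample with probability capacity / seen,
            # evicting a uniformly chosen old entry.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def sample(self, k):
        # Draw a rehearsal mini-batch of old samples to mix
        # with the current task's training batch.
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

# Stream two toy "tasks" of 100 samples each through an 8-slot memory.
buf = ReplayBuffer(capacity=8)
for task_id in range(2):
    for i in range(100):
        buf.add((task_id, i))
print(len(buf.buffer))  # -> 8: memory never exceeds its budget
```

The fixed `capacity` is exactly the memory-budget design factor the comparative analyses examine: larger buffers reduce forgetting but raise the memory footprint on embedded targets.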