🤖 AI Summary
Edge devices operating in open-world environments face persistent challenges in continual learning due to data-distribution shifts and the emergence of novel classes, while conventional offline training paradigms fail to meet stringent constraints on power efficiency and real-time inference. To address this, we propose CLP-SNN, a spiking neural network architecture designed for replay-free online continual learning on the Intel Loihi 2 neuromorphic processor. Our approach integrates event-driven, spatiotemporally sparse local learning; self-normalizing three-factor synaptic plasticity; and a synergistic neurogenesis-metaplasticity mechanism that together enable dynamic capacity expansion and mitigate catastrophic forgetting. Evaluated on the OpenLORIS few-shot benchmark, CLP-SNN achieves accuracy competitive with replay-based methods while attaining an inference latency of 0.33 ms (70× faster than the best alternative OCL method on an edge GPU) and an energy cost of only 0.05 mJ per inference (5,600× lower). These results break the traditional accuracy-efficiency trade-off in edge continual learning.
📝 Abstract
AI systems on edge devices face a critical challenge in open-world environments: adapting when data distributions shift and novel classes emerge. While offline training dominates current paradigms, online continual learning (OCL), in which models learn incrementally from non-stationary streams without catastrophic forgetting, remains challenging in power-constrained settings. We present a neuromorphic solution, CLP-SNN: a spiking neural network architecture for Continually Learning Prototypes and its implementation on Intel's Loihi 2 chip. Our approach introduces three innovations: (1) event-driven, spatiotemporally sparse local learning; (2) a self-normalizing three-factor learning rule that maintains weight normalization; and (3) integrated neurogenesis and metaplasticity for capacity expansion and forgetting mitigation. In few-shot learning experiments on OpenLORIS, CLP-SNN achieves accuracy competitive with replay methods while remaining rehearsal-free. CLP-SNN also delivers transformative efficiency gains: it is 70× faster (0.33 ms vs. 23.2 ms) and 5,600× more energy efficient (0.05 mJ vs. 281 mJ) than the best alternative OCL method on an edge GPU. This demonstrates that co-designed brain-inspired algorithms and neuromorphic hardware can break traditional accuracy-efficiency trade-offs for future edge AI systems.
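The self-normalizing three-factor rule named above can be pictured with a minimal sketch. This is not the paper's implementation; the update form, learning rate, gating scheme, and normalization choice are assumptions for illustration. The idea shown is that a prototype moves toward an input only when a third (modulatory) signal gates the local pre/post update, and the weight vector is renormalized so its magnitude stays bounded without a separate decay term:

```python
import numpy as np

def three_factor_update(prototype, x, modulator, lr=0.1):
    """Hedged sketch of a self-normalizing three-factor local update.

    prototype : current prototype weight vector (unit norm)
    x         : input feature vector (the pre-synaptic factor)
    modulator : scalar third factor (e.g. 1.0 = reinforce, 0.0 = no update)
    """
    # Local Hebbian-style move toward the input, gated by the modulator.
    updated = prototype + lr * modulator * (x - prototype)
    # Self-normalization: project back onto the unit sphere so weight
    # magnitudes stay bounded as learning continues online.
    return updated / np.linalg.norm(updated)

# Usage: nearest-prototype classification with a gated online update.
rng = np.random.default_rng(0)
protos = {c: v / np.linalg.norm(v)
          for c, v in ((0, rng.normal(size=8)), (1, rng.normal(size=8)))}
x = rng.normal(size=8)
x /= np.linalg.norm(x)
pred = max(protos, key=lambda c: protos[c] @ x)   # cosine similarity
protos[pred] = three_factor_update(protos[pred], x, modulator=1.0)
```

In a full system, neurogenesis would allocate a fresh prototype when no existing one matches a novel input well, and metaplasticity would shrink the effective learning rate of mature prototypes; both are omitted here for brevity.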