🤖 AI Summary
Sparse neural networks suffer from rigid modular architectures that impede cross-task knowledge reuse, leading to substantial performance degradation in continual learning under high sparsity. To address this, we propose Selective Subnetwork Distillation (SSD), the first framework to reformulate knowledge distillation as topology-aligned information channeling, rather than conventional regularization, enabling structured knowledge transfer without replay or task-identity labels. SSD dynamically selects reusable subnetworks based on activation frequency and jointly optimizes the sparse topology via structural alignment distillation and logit distillation. Evaluated on Split CIFAR-10, CIFAR-100, and MNIST, SSD significantly improves classification accuracy, memory retention, and representation coverage, effectively overcoming the performance bottleneck of continual learning under high sparsity.
📝 Abstract
Sparse neural systems are gaining traction for efficient continual learning due to their modularity and low interference. Architectures such as Sparse Distributed Memory Multi-Layer Perceptrons (SDMLP) construct task-specific subnetworks via Top-K activation and have shown resilience against catastrophic forgetting. However, their rigid modularity limits cross-task knowledge reuse and degrades performance under high sparsity. We propose Selective Subnetwork Distillation (SSD), a structurally guided continual learning framework that treats distillation not as a regularizer but as a topology-aligned information conduit. SSD identifies neurons with high activation frequency and selectively distills knowledge within the Top-K subnetworks of previous tasks and over the output logits, without requiring replay or task labels. This enables structural realignment while preserving sparse modularity. Experiments on Split CIFAR-10, CIFAR-100, and MNIST demonstrate that SSD improves accuracy, retention, and representation coverage, offering a structurally grounded solution for sparse continual learning.
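The abstract describes three mechanisms: Top-K sparse activation, selection of frequently activated neurons as a reusable subnetwork, and a combined structural-alignment plus logit-distillation objective. The sketch below illustrates one plausible reading of that pipeline in numpy; the function names, the mean-squared structural term, the selection fraction, and the temperature/weighting hyperparameters are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def topk_mask(h, k):
    """Keep the k largest activations per sample; zero the rest (Top-K sparsity)."""
    idx = np.argsort(h, axis=1)[:, -k:]
    mask = np.zeros_like(h)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    return mask

def select_reusable_neurons(activation_counts, fraction=0.25):
    """Pick the most frequently activated hidden units as the reusable subnetwork.
    `fraction` is a hypothetical hyperparameter, not from the paper."""
    n_keep = max(1, int(len(activation_counts) * fraction))
    return np.argsort(activation_counts)[-n_keep:]

def ssd_loss(h_student, h_teacher, logits_s, logits_t, neurons, tau=2.0, alpha=0.5):
    """Illustrative SSD-style objective: structural alignment restricted to the
    selected subnetwork, plus temperature-softened logit distillation."""
    # Structural alignment: match hidden activations only on reusable neurons.
    struct = np.mean((h_student[:, neurons] - h_teacher[:, neurons]) ** 2)

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # Logit distillation: KL divergence between softened teacher/student outputs.
    p_t = softmax(logits_t / tau)
    p_s = softmax(logits_s / tau)
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1))
    return alpha * struct + (1 - alpha) * (tau ** 2) * kl
```

In this reading, the Top-K masks from previous tasks determine which neurons accumulate activation counts, and only the resulting high-frequency subnetwork carries the structural-alignment signal, so distillation acts as a targeted channel rather than a global penalty.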