Distillation-Guided Structural Transfer for Continual Learning Beyond Sparse Distributed Memory

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sparse neural networks suffer from rigid modular architectures that impede cross-task knowledge reuse, leading to substantial performance degradation in continual learning under high sparsity. To address this, we propose Selective Subnetwork Distillation (SSD), the first framework to reformulate knowledge distillation as topology-aligned information channeling rather than conventional regularization, enabling structured knowledge transfer without replay or task-identity labels. SSD dynamically selects reusable subnetworks based on activation frequency and jointly optimizes the sparse topology via structural-alignment distillation and logits distillation. Evaluated on Split CIFAR-10/100 and MNIST, SSD significantly improves classification accuracy, memory retention, and representation coverage, effectively overcoming the performance bottleneck of continual learning under high sparsity.

📝 Abstract
Sparse neural systems are gaining traction for efficient continual learning due to their modularity and low interference. Architectures such as Sparse Distributed Memory Multi-Layer Perceptrons (SDMLP) construct task-specific subnetworks via Top-K activation and have shown resilience against catastrophic forgetting. However, their rigid modularity limits cross-task knowledge reuse and leads to performance degradation under high sparsity. We propose Selective Subnetwork Distillation (SSD), a structurally guided continual learning framework that treats distillation not as a regularizer but as a topology-aligned information conduit. SSD identifies neurons with high activation frequency and selectively distills knowledge within previous Top-K subnetworks and output logits, without requiring replay or task labels. This enables structural realignment while preserving sparse modularity. Experiments on Split CIFAR-10, CIFAR-100, and MNIST demonstrate that SSD improves accuracy, retention, and representation coverage, offering a structurally grounded solution for sparse continual learning.
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in sparse neural systems for continual learning.
Enables cross-task knowledge reuse without replay or task labels.
Improves accuracy and retention in high sparsity scenarios.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective Subnetwork Distillation for knowledge transfer
Distillation as topology-aligned information conduit
No replay or task labels required for learning
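To make the mechanism above concrete, here is a minimal NumPy sketch of the two ingredients the summary describes: a Top-K activation mask (as in SDMLP-style sparse layers) and a combined loss that aligns hidden activations only on a selected subnetwork while distilling softened logits. All names (`top_k_mask`, `ssd_loss`, the `alpha` weighting, the frequency `threshold`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def top_k_mask(acts, k):
    # Binary mask keeping the k largest activations per sample (Top-K layer).
    idx = np.argsort(acts, axis=1)[:, -k:]
    mask = np.zeros_like(acts)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    return mask

def select_subnetwork(activation_freq, threshold):
    # Reusable neurons: those activated frequently on previous tasks
    # (hypothetical frequency-threshold rule standing in for SSD's selection).
    return (activation_freq >= threshold).astype(float)

def ssd_loss(student_h, teacher_h, student_logits, teacher_logits,
             neuron_mask, alpha=0.5, temperature=2.0):
    # Structural alignment: match hidden activations only on selected neurons.
    diff = (student_h - teacher_h) * neuron_mask
    struct = np.mean(diff ** 2)

    # Logits distillation: KL between temperature-softened distributions.
    def soft(x):
        e = np.exp((x - x.max(axis=1, keepdims=True)) / temperature)
        return e / e.sum(axis=1, keepdims=True)

    p, q = soft(teacher_logits), soft(student_logits)
    kl = np.mean(np.sum(p * (np.log(p + 1e-8) - np.log(q + 1e-8)), axis=1))
    return alpha * struct + (1 - alpha) * kl
```

Because the structural term is masked, neurons outside the selected subnetwork are free to adapt to the new task, which is the intuition behind distillation acting as a channel rather than a blanket regularizer.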
Authors
Huiyan Xue — School of Computer Science and Technology, Dalian University of Technology, China
Xuming Ran — National University of Singapore (Generative model, Visual cortex computation, Memory modelling, Continual learning, AI for Science)
Yaxin Li — School of Computer Science and Technology, Dalian University of Technology, China
Qi Xu — School of Computer Science and Technology, Dalian University of Technology, China
Enhui Li — School of Computer Science and Technology, Dalian University of Technology, China
Yi Xu — School of Computer Science and Technology, Dalian University of Technology, China
Qiang Zhang — School of Computer Science and Technology, Dalian University of Technology, China