ReReLRP - Remembering and Recognizing Tasks with LRP

πŸ“… 2025-02-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Deep neural networks suffer from catastrophic forgetting in continual learning, i.e., significant performance degradation on previously learned tasks upon acquiring new ones. This paper proposes a novel replay-free, memory-module-free continual learning method that dynamically identifies and preserves task-critical knowledge pathways across layers via Layer-wise Relevance Propagation (LRP). To the authors' knowledge, this is the first work to leverage LRP for task-aware knowledge consolidation and incremental critical-path identification, inherently offering interpretability, strong privacy preservation (no raw-data storage), and architecture-agnostic compatibility. Evaluated on multiple standard benchmarks, the approach achieves accuracy comparable to a well-known rehearsal-based method in selected scenarios while substantially reducing computational overhead and memory footprint, and supports plug-and-play integration with mainstream architectures. Key contributions include: (1) data-free knowledge retention, (2) high memory efficiency, (3) no raw-data storage and hence stronger privacy, (4) broad architectural compatibility, and (5) native integration of model interpretability.
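The summary above describes identifying task-critical pathways with LRP and preserving them across tasks. The following is a minimal illustrative sketch, not the authors' implementation: it applies the standard LRP-ε rule to a toy two-layer ReLU network to score hidden units by relevance, then selects the top-scoring units as candidates to freeze for later tasks. All names and the freezing heuristic here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer ReLU network (hypothetical stand-in for the paper's models).
W1 = rng.normal(size=(8, 4))   # hidden x input
W2 = rng.normal(size=(3, 8))   # output x hidden

def lrp_epsilon(x, eps=1e-6):
    """Forward pass plus an LRP-epsilon backward pass.

    Returns per-hidden-unit relevance for the winning output logit.
    """
    a1 = np.maximum(0.0, W1 @ x)   # hidden activations
    z = W2 @ a1                    # output logits
    # Initialize relevance at the predicted class only.
    R_out = np.zeros_like(z)
    c = int(np.argmax(z))
    R_out[c] = z[c]
    # Epsilon rule: R_j = a_j * sum_k w_kj * R_k / (z_k + eps * sign(z_k)).
    denom = z + eps * np.sign(z) + (z == 0) * eps
    s = R_out / denom
    return a1 * (W2.T @ s)

x = rng.normal(size=4)
R = lrp_epsilon(x)

# Hypothetical consolidation step: mark the k most relevant hidden units
# as frozen, e.g., by masking their gradient updates on future tasks.
k = 3
frozen = np.argsort(-np.abs(R))[:k]
```

By the conservation property of the ε-rule, the hidden-layer relevances approximately sum to the winning logit, which is what makes relevance a principled budget for deciding which units a task "owns".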

πŸ“ Abstract
Deep neural networks have revolutionized numerous research fields and applications. Despite their widespread success, a fundamental limitation known as catastrophic forgetting remains, where models fail to retain their ability to perform previously learned tasks after being trained on new ones. This limitation is particularly acute in continual learning scenarios, where models must integrate knowledge from new domains with their existing capabilities. Traditional approaches to mitigating this problem typically rely on memory replay mechanisms, storing either original data samples, prototypes, or activation patterns. Although effective, these methods often introduce significant computational overhead, raise privacy concerns, and require dedicated architectures. In this work, we present ReReLRP (Remembering and Recognizing with LRP), a novel solution that leverages Layer-wise Relevance Propagation (LRP) to preserve information across tasks. Our contribution improves on the privacy of existing replay-free methods while additionally offering built-in explainability, flexibility in model architecture and deployment, and a new mechanism for increasing memory storage efficiency. We validate our approach on a wide variety of datasets, demonstrating results comparable with a well-known replay-based method in selected scenarios.
Problem

Research questions and friction points this paper is trying to address.

Mitigate catastrophic forgetting in neural networks
Enhance privacy in replay-free continual learning
Increase memory storage efficiency using LRP
Innovation

Methods, ideas, or system contributions that make the work stand out.

LRP for task memory retention
Enhanced privacy in replay-free methods
Explainable and flexible model architecture
Karolina Bogacka
Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland; NeverBlink, Warsaw, Poland
Maximilian Höfler
Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
Maria Ganzha
Associate Professor Warsaw University of Technology
Agent-based computing, Multiagent systems, Distributed systems, Ontology, Semantic Data Processing
Wojciech Samek
Professor at TU Berlin, Head of AI Department at Fraunhofer HHI, BIFOLD Fellow
Deep Learning, Interpretability, Explainable AI, Trustworthy AI, Federated Learning
Katarzyna Wasielewska-Michniewska
Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland