Privacy-Aware Lifelong Learning

📅 2025-05-16
🏛️ International Conference on Learning Representations
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses privacy-compliant continual learning, tackling the core challenge of *how to exactly forget the knowledge of specific tasks—thereby satisfying the "right to be forgotten"—while continually acquiring new tasks, avoiding catastrophic forgetting, enabling forward knowledge transfer, and minimizing memory overhead*. The authors propose the first unified framework integrating continual learning with machine unlearning, featuring a precise task-level unlearning mechanism: it employs task-specific sparse subnetworks for parameter isolation and sharing, augmented by lightweight episodic memory rehearsal to jointly ensure privacy preservation, learning stability, and computational efficiency. Evaluated on multiple image classification benchmarks, the method significantly outperforms existing approaches, achieving state-of-the-art privacy-aware continual learning performance. It is the first to enable verifiable, controllable, and low-overhead task-level knowledge addition and deletion within a single neural network.

📝 Abstract
Lifelong learning algorithms enable models to incrementally acquire new knowledge without forgetting previously learned information. Contrarily, the field of machine unlearning focuses on explicitly forgetting certain previous knowledge from pretrained models when requested, in order to comply with data privacy regulations on the right-to-be-forgotten. Enabling efficient lifelong learning with the capability to selectively unlearn sensitive information from models presents a critical and largely unaddressed challenge with contradicting objectives. We address this problem from the perspective of simultaneously preventing catastrophic forgetting and allowing forward knowledge transfer during task-incremental learning, while ensuring exact task unlearning and minimizing memory requirements, based on a single neural network model to be adapted. Our proposed solution, privacy-aware lifelong learning (PALL), involves optimization of task-specific sparse subnetworks with parameter sharing within a single architecture. We additionally utilize an episodic memory rehearsal mechanism to facilitate exact unlearning without performance degradations. We empirically demonstrate the scalability of PALL across various architectures in image classification, and provide a state-of-the-art solution that uniquely integrates lifelong learning and privacy-aware unlearning mechanisms for responsible AI applications.
Problem

Research questions and friction points this paper is trying to address.

Balancing lifelong learning and selective unlearning of sensitive data
Preventing catastrophic forgetting while enabling forward knowledge transfer
Ensuring exact task unlearning with minimal memory overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes task-specific sparse subnetworks with sharing
Uses episodic memory rehearsal for exact unlearning
Integrates lifelong learning and privacy-aware unlearning
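The parameter-isolation idea behind these contributions can be illustrated with a toy sketch. The snippet below is a hypothetical, simplified illustration (not PALL's actual algorithm): each task claims a sparse binary mask over a shared weight vector, previously claimed weights are frozen but reusable for forward transfer, and exact task-level unlearning amounts to deleting the task's mask and resetting the weights used exclusively by that task. Mask selection here is random for brevity; the paper optimizes task-specific subnetworks.

```python
import numpy as np


class MaskedSubnetworkModel:
    """Toy model with per-task binary parameter masks (illustrative sketch).

    Each task trains only a sparse subset of a shared weight vector.
    Weights claimed by earlier tasks stay frozen but remain readable,
    which permits forward knowledge transfer. Unlearning a task drops
    its mask and zeroes the weights that no remaining task relies on.
    """

    def __init__(self, n_params, seed=0):
        self.rng = np.random.default_rng(seed)
        self.weights = np.zeros(n_params)
        self.masks = {}  # task_id -> boolean mask over parameters

    def learn_task(self, task_id, sparsity=0.2):
        # Pick a sparse subnetwork for this task (random here; a real
        # method would use a learned saliency or pruning criterion).
        n = self.weights.size
        mask = np.zeros(n, dtype=bool)
        idx = self.rng.choice(n, size=int(sparsity * n), replace=False)
        mask[idx] = True
        # Only previously unused parameters are trainable; overlapping
        # parameters are shared read-only (frozen) for transfer.
        if self.masks:
            used_before = np.any(list(self.masks.values()), axis=0)
        else:
            used_before = np.zeros(n, dtype=bool)
        trainable = mask & ~used_before
        self.weights[trainable] = self.rng.normal(size=trainable.sum())
        self.masks[task_id] = mask

    def unlearn_task(self, task_id):
        # Exact task-level unlearning: remove the task's mask and reset
        # the parameters used exclusively by that task.
        mask = self.masks.pop(task_id)
        if self.masks:
            others = np.any(list(self.masks.values()), axis=0)
        else:
            others = np.zeros_like(mask)
        exclusive = mask & ~others
        self.weights[exclusive] = 0.0
        return int(exclusive.sum())  # number of parameters freed


# Usage: learn two tasks, then exactly delete the first one.
model = MaskedSubnetworkModel(1000)
model.learn_task("A")
model.learn_task("B")
freed = model.unlearn_task("A")  # "A"-exclusive weights are now zero
```

Because shared parameters survive unlearning, replay of a small episodic memory (as in the paper) would then be needed to restore any performance lost on the remaining tasks.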
Ozan Özdenizci
Graz University of Technology
Machine Learning · Artificial Intelligence
Elmar Rueckert
Chair of Cyber-Physical-Systems, Montanuniversität Leoben, Austria
R. Legenstein
Institute of Machine Learning and Neural Computation, Graz University of Technology, Austria