Rehearsal-free and Task-free Online Continual Learning With Contrastive Prompt

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Online continual learning (OCL) suffers from catastrophic forgetting, and existing approaches rely either on sample replay, which poses privacy risks, or on task-boundary assumptions, which do not hold in realistic streaming scenarios. This paper introduces F2OCL, the first replay-free and task-agnostic OCL paradigm. Its core innovation is coupling contrastive prompt learning with a nearest-class-mean (NCM) classifier: learnable prompts enhance feature discriminability, enabling privacy-preserving knowledge updates from a single pass over non-stationary data streams. Crucially, F2OCL operates without storing past samples or requiring task identifiers. Evaluated on two standard benchmarks, the framework significantly mitigates forgetting and achieves state-of-the-art performance under strict constraints: zero sample storage and no task-boundary information.
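The summary above says learnable prompts are trained contrastively to make features of the same class cluster together. The paper's exact loss is not given here; the sketch below shows one common supervised contrastive objective (an NT-Xent-style loss over a batch of features) that could serve this role. The function name and temperature value are illustrative, not the authors' implementation.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Pull same-class features together, push different-class features apart.

    features: (N, D) array, assumed L2-normalized per row.
    labels:   (N,) array of class labels.
    """
    sims = features @ features.T / temperature          # pairwise similarities
    np.fill_diagonal(sims, -np.inf)                     # exclude self-pairs
    # row-wise log-softmax over similarities
    log_prob = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    # negated average log-probability of each sample's positive pairs
    pos_counts = same.sum(axis=1)
    pos_log_prob = np.where(same, log_prob, 0.0).sum(axis=1)
    loss_per_sample = -pos_log_prob / np.maximum(pos_counts, 1)
    return loss_per_sample[pos_counts > 0].mean()
```

In the F2OCL setting this loss would be applied to prompt-conditioned backbone features, so minimizing it sharpens the class clusters that the NCM classifier later relies on.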

📝 Abstract
The main challenge of continual learning is *catastrophic forgetting*. Because it processes data in one pass, online continual learning (OCL) is one of the most difficult continual learning scenarios. To address catastrophic forgetting in OCL, some existing studies use a rehearsal buffer to store samples and replay them later in the learning process; other studies do not store samples but assume a sequence of learning tasks so that task identities can be exploited. However, storing samples may raise data security or privacy concerns, and it is not always possible to identify the boundaries between learning tasks in one pass over the data. This motivates us to investigate rehearsal-free and task-free OCL (F2OCL). By integrating prompt learning with an NCM classifier, this study effectively tackles catastrophic forgetting without storing samples and without using task boundaries or identities. Extensive experimental results on two benchmarks demonstrate the effectiveness of the proposed method.
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in online continual learning scenarios
Eliminates need for rehearsal buffers storing sensitive data samples
Operates without requiring task boundaries or identity information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses contrastive prompt learning technique
Integrates prompt learning with NCM classifier
Avoids storing samples and task boundaries