Task-Core Memory Management and Consolidation for Long-term Continual Learning

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the catastrophic forgetting that is exacerbated in long-term continual learning (Long-CL), where a model learns from massive, streaming task sequences. Inspired by human memory mechanisms, the authors propose a framework that combines a task-core memory management strategy, which indexes crucial samples and adaptively updates them as learning progresses, with a long-term memory consolidation mechanism that selectively retains hard and discriminative samples for robust knowledge retention. The approach is evaluated on two newly constructed long-term benchmarks, MMLongCL-Bench (multimodal) and TextLongCL-Bench (text-only), where it surpasses the previous state of the art by 7.4% and 6.5% AP, respectively. The core contribution is the first formal incorporation of long-term memory consolidation modeling into continual learning, enabling joint optimization of knowledge acquisition efficiency and long-horizon retention.

📝 Abstract
In this paper, we focus on a long-term continual learning (CL) task, where a model learns sequentially from a vast stream of tasks over time, acquiring new knowledge while retaining previously learned information in a manner akin to human learning. Unlike traditional CL settings, long-term CL involves handling a significantly larger number of tasks, which exacerbates the issue of catastrophic forgetting. Our work seeks to address two critical questions: 1) How do existing CL methods perform in the context of long-term CL? and 2) How can we mitigate the catastrophic forgetting that arises from prolonged sequential updates? To tackle these challenges, we propose a novel framework inspired by human memory mechanisms for long-term continual learning (Long-CL). Specifically, we introduce a task-core memory management strategy to efficiently index crucial memories and adaptively update them as learning progresses. Additionally, we develop a long-term memory consolidation mechanism that selectively retains hard and discriminative samples, ensuring robust knowledge retention. To facilitate research in this area, we construct and release two benchmarks, the multi-modal MMLongCL-Bench and the textual TextLongCL-Bench, providing a valuable resource for evaluating long-term CL approaches. Experimental results show that Long-CL outperforms the previous state-of-the-art by 7.4% and 6.5% AP on the two benchmarks, respectively, demonstrating the effectiveness of our approach.
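
To make the long-term CL setting concrete, below is a minimal, hypothetical sketch of sequential training over a task stream with a bounded rehearsal memory. All names here (`MemoryBuffer`, `train_step`, `task.batches`, `sample_for_memory`) are illustrative assumptions; the paper's actual task-core memory management and update rules are not reproduced.

```python
# Hypothetical sketch of long-term continual learning with a bounded memory.
# Interfaces (MemoryBuffer, model.train_step, task.batches, task.sample_for_memory)
# are assumptions for illustration, not the paper's API.
import random
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class MemoryBuffer:
    capacity: int
    samples: List[Tuple] = field(default_factory=list)
    seen: int = 0

    def add(self, items: List[Tuple]) -> None:
        # Reservoir sampling keeps a uniform subset of everything seen so far
        # while the buffer stays bounded at `capacity`.
        for item in items:
            self.seen += 1
            if len(self.samples) < self.capacity:
                self.samples.append(item)
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.samples[j] = item

    def replay(self, k: int) -> List[Tuple]:
        # Draw a small batch of stored samples to mix with the current task's data.
        return random.sample(self.samples, min(k, len(self.samples)))


def train_long_cl(model, task_stream, memory: MemoryBuffer, replay_size: int = 32):
    """Learn tasks sequentially, replaying a few stored samples alongside new data."""
    for task in task_stream:
        for batch in task.batches():
            replay = memory.replay(replay_size)
            model.train_step(list(batch) + replay)  # joint update on new + replayed samples
        memory.add(task.sample_for_memory())        # retain a handful of samples per task
```
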
Problem

Research questions and friction points this paper is trying to address.

Evaluating existing continual learning methods in long-term scenarios
Mitigating catastrophic forgetting during prolonged sequential updates
Developing memory management and consolidation for robust knowledge retention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-core memory management for efficient indexing
Adaptive memory updates during learning progress
Selective retention of hard, discriminative samples (illustrated in the sketch below)
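
As a rough illustration of selective retention of hard, discriminative samples, the PyTorch-style sketch below scores candidates by per-sample loss (hardness) and by the gap between their top two predicted class probabilities (how hard they are to discriminate), then keeps the top-k. This is an assumed selection criterion for illustration only, not the paper's exact consolidation rule.

```python
# Hypothetical hard-sample selection for memory consolidation (assumed criterion).
import torch
import torch.nn.functional as F


def select_hard_samples(model, inputs, labels, k):
    """Keep the k samples with the highest loss and the smallest class-probability margin."""
    model.eval()
    with torch.no_grad():
        logits = model(inputs)                                    # [N, C] class logits
        loss = F.cross_entropy(logits, labels, reduction="none")  # per-sample hardness
        probs = logits.softmax(dim=-1)
        top2 = probs.topk(2, dim=-1).values
        margin = top2[:, 0] - top2[:, 1]                          # small margin = hard to discriminate
        score = loss - margin                                     # favor high loss and low margin
    keep = score.topk(min(k, len(score))).indices
    return inputs[keep], labels[keep]
```
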
Authors

Tianyu Huai
East China Normal University
Continual Learning

Jie Zhou
School of Computer Science and Technology, East China Normal University

Yuxuan Cai
School of Electrical and Electronic Engineering, Nanyang Technological University

Qin Chen
School of Computer Science and Technology, East China Normal University

Wen Wu
School of Computer Science and Technology, East China Normal University

Xingjiao Wu
East China Normal University
Computer Vision, Crowd Counting, Document Layout Analysis, Human-in-the-loop

Xipeng Qiu
Computation and Artificial Intelligence Innovative College, Fudan University

Liang He
School of Computer Science and Technology, East China Normal University