AI Summary
To address catastrophic forgetting and cross-task knowledge transfer in continual text classification (CTC), this paper proposes a dual-prompt-space architecture: private prompts (P-Prompts) capture task-specific knowledge, while shared prompts (S-Prompts) encode task-invariant representations. The paper introduces an information-theoretic complementary prompting mechanism, instantiated via two mutual-information-maximization objectives: one mitigates forgetting and the other enhances forward transfer. The method integrates prompt learning, contrastive representation learning, and dual-stream parameterized prompt modeling, enabling sequential learning without data replay. Evaluated on multiple CTC benchmarks, the approach significantly outperforms state-of-the-art methods, effectively alleviating catastrophic forgetting and improving generalization to novel tasks.
Abstract
Continual Text Classification (CTC) aims to continuously classify new text data over time while minimizing catastrophic forgetting of previously acquired knowledge. However, existing methods often focus on task-specific knowledge, overlooking the importance of shared, task-agnostic knowledge. Inspired by the complementary learning systems theory, which posits that humans learn continually through the interaction of two systems -- the hippocampus, responsible for forming distinct representations of specific experiences, and the neocortex, which extracts more general and transferable representations from past experiences -- we introduce Information-Theoretic Complementary Prompts (InfoComp), a novel approach for CTC. InfoComp explicitly learns two distinct prompt spaces: P(rivate)-Prompt and S(hared)-Prompt. These respectively encode task-specific and task-invariant knowledge, enabling models to sequentially learn classification tasks without relying on data replay. To promote more informative prompt learning, InfoComp uses an information-theoretic framework that maximizes mutual information between different parameters (or encoded representations). Within this framework, we design two novel loss functions: (1) to strengthen the accumulation of task-specific knowledge in P-Prompt, effectively mitigating catastrophic forgetting, and (2) to enhance the retention of task-invariant knowledge in S-Prompt, improving forward knowledge transfer. Extensive experiments on diverse CTC benchmarks show that our approach outperforms previous state-of-the-art methods.
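The abstract states that InfoComp's two losses maximize mutual information between prompt parameters or encoded representations. A standard way to make such an objective trainable is the InfoNCE lower bound on mutual information, which scores paired representations against in-batch negatives. The sketch below is illustrative only and is not taken from the paper: the function name, temperature value, and the idea of feeding it prompt-conditioned encodings (e.g. P-Prompt vs. S-Prompt views of the same input) are assumptions about how such an objective could be instantiated.

```python
import numpy as np

def info_nce_lower_bound(anchors, positives, temperature=0.1):
    """InfoNCE lower bound on the mutual information between two views.

    anchors, positives: (N, d) arrays of paired representations, e.g.
    prompt-conditioned encodings of the same input under two prompt
    spaces (hypothetical usage; not the paper's exact formulation).
    Row i of `anchors` is the positive pair of row i of `positives`;
    all other rows act as in-batch negatives.
    """
    # Cosine-normalize so similarity is a dot product.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) similarity matrix

    # Row-wise log-softmax; diagonal entries are the positive pairs.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nce = log_prob.diagonal().mean()

    # InfoNCE bound: I(X; Y) >= log N + E[log p(positive)].
    return np.log(len(anchors)) + nce
```

Maximizing this quantity (or minimizing its negative as a loss term alongside the classification loss) pulls paired representations together relative to negatives, which matches the abstract's description of strengthening knowledge accumulation in one prompt space while retaining transferable knowledge in the other. The bound is at most log N, so larger batches allow tighter MI estimates.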