🤖 AI Summary
Existing approaches to incremental learning for large language models (LLMs) suffer from severe catastrophic forgetting, poor generalization, or an inability to support true online adaptation, largely because they do not enable real-time, progressive updates to an LLM's core parameters. Method: We systematically survey four major paradigms (continual learning, meta-learning, parameter-efficient fine-tuning such as LoRA and Adapters, and mixture-of-experts, MoE) and find that none achieves genuine real-time incremental updates to an LLM's fundamental weight matrices. Contribution/Results: Based on this analysis, we propose a comprehensive taxonomy for LLM incremental learning that delineates the capabilities and applicability boundaries of each paradigm. We further articulate a forward-looking research direction that jointly prioritizes low catastrophic forgetting, strong cross-task knowledge generalization, and seamless online parameter updateability, establishing foundational principles for next-generation adaptive LLMs.
📝 Abstract
Incremental learning is the ability of a system to acquire knowledge over time, enabling its adaptation and generalization to novel tasks. It is a critical capability for intelligent real-world systems, especially when data changes frequently or is limited. This review provides a comprehensive analysis of incremental learning in large language models (LLMs). It synthesizes the state-of-the-art incremental learning paradigms, including continual learning, meta-learning, parameter-efficient learning, and mixture-of-experts learning, and demonstrates their utility for incremental learning by describing specific achievements in each area and the factors critical to their success. An important finding is that many of these approaches do not update the core model, and none of them updates incrementally in real time. The paper highlights open problems and challenges for future research in the field. By consolidating the latest relevant research developments, this review offers a comprehensive understanding of incremental learning and its implications for designing and developing LLM-based learning systems.
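The abstract's central finding, that parameter-efficient approaches such as LoRA do not update the core model, can be made concrete with a minimal sketch. The snippet below is an illustrative NumPy toy of the general LoRA idea (a frozen base weight plus trainable low-rank factors), not an implementation from the paper; all names and dimensions are assumptions chosen for the example.

```python
import numpy as np

class LoRALinear:
    """Toy LoRA-style linear layer (illustrative sketch only).

    The base weight W (d_out x d_in) is frozen and never updated; only the
    low-rank factors A (rank x d_in) and B (d_out x rank) would be trained.
    This is why such methods leave the core model parameters untouched.
    """

    def __init__(self, d_in, d_out, rank=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))        # frozen base weight
        self.A = rng.normal(size=(rank, d_in)) * 0.01  # trainable factor
        self.B = np.zeros((d_out, rank))               # trainable, zero-initialized

    def forward(self, x):
        # Base path plus low-rank correction: x (W + B A)^T
        return x @ (self.W + self.B @ self.A).T

layer = LoRALinear(d_in=8, d_out=4)
x = np.ones((1, 8))
# With B zero-initialized, the adapter contributes nothing at first,
# so the output equals that of the frozen base model.
assert np.allclose(layer.forward(x), x @ layer.W.T)
```

Training would adjust only `A` and `B`, so knowledge acquired incrementally lives entirely in the small adapter matrices, while the pretrained weights, and hence the model's core behavior, remain fixed.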