KIF: Knowledge Identification and Fusion for Language Model Continual Learning

📅 2024-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting and the inefficient transfer of knowledge from prior tasks in continual learning for large language models (LLMs), this paper proposes the Knowledge Identification and Fusion (KIF) framework. KIF introduces a novel skill-unit partitioning mechanism based on parameter dependencies, coupled with group-level knowledge importance modeling and dynamic fusion strategies, enabling fine-grained, bidirectional (forward and backward) knowledge transfer without memory replay. It is fully compatible with parameter-efficient fine-tuning (PEFT) methods such as LoRA and can be combined synergistically with replay-based approaches. Evaluated on two major continual learning benchmarks with models ranging from 220M to 7B parameters, KIF consistently outperforms state-of-the-art methods, achieving an average accuracy improvement of 5.2%; it significantly mitigates forgetting while demonstrating strong generalization across tasks and model sizes.
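As a rough illustration of the identification step described above, here is a minimal PyTorch sketch: it groups trainable parameters into skill units by their parent module and scores each unit with a first-order gradient sensitivity. The grouping rule, the sensitivity proxy, and the names `partition_into_skill_units` and `group_importance` are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def partition_into_skill_units(model):
    """Group trainable parameters into 'skill units' by parent module.
    A simplified stand-in for KIF's parameter-dependency partitioning:
    coupled matrices in one adapter (e.g., a LoRA A/B pair) share a unit."""
    units = {}
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # Drop the trailing ".lora_A.weight"-style suffix so that
        # dependent matrices of the same module map to the same key.
        unit_key = name.rsplit(".", 2)[0]
        units.setdefault(unit_key, []).append(param)
    return units

def group_importance(units, loss):
    """Score each unit with a first-order sensitivity |theta * dL/dtheta|,
    averaged within the unit; an assumed proxy for the paper's group-wise
    importance measure, not its exact formulation."""
    flat = [p for params in units.values() for p in params]
    grads = torch.autograd.grad(loss, flat, allow_unused=True)
    scores, i = {}, 0
    for key, params in units.items():
        total = 0.0
        for p in params:
            g = grads[i]
            total += (p.detach() * g).abs().mean().item() if g is not None else 0.0
            i += 1
        scores[key] = total / len(params)
    return scores
```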

📝 Abstract
Language model continual learning (CL) has recently attracted significant interest for its ability to adapt large language models (LLMs) to dynamic real-world scenarios without retraining. A major challenge in this domain is catastrophic forgetting, where models lose previously acquired knowledge upon learning new tasks. Existing approaches commonly utilize multiple parameter-efficient fine-tuning (PEFT) blocks to acquire task-specific knowledge, yet these methods are inefficient and fail to leverage potential knowledge transfer across tasks. In this paper, we introduce a novel CL framework for language models, named Knowledge Identification and Fusion (KIF), which boosts knowledge transfer without depending on memory replay. KIF initially segregates the model into 'skill units' based on parameter dependencies, allowing for more precise control. Subsequently, it employs a novel group-wise knowledge identification technique to ascertain the importance distribution of skill units for a new task. By comparing this importance distribution with those from previous tasks, we implement a fine-grained knowledge fusion strategy that retains task-specific knowledge, thereby preventing forgetting, and updates task-shared knowledge, which facilitates bi-directional knowledge transfer. As a result, KIF achieves an optimal balance between retaining prior knowledge and excelling in new tasks. KIF also demonstrates strong generalizability, making it suitable for various base models and adaptable to PEFT methods like LoRA. Furthermore, it offers notable extensibility, supporting enhancements through integration with memory replay techniques. Comprehensive experiments conducted on two CL benchmarks, involving models ranging from 220M to 7B parameters, affirm the effectiveness of KIF and its variants across different settings.
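To make the fusion step in the abstract concrete, the sketch below shows one plausible keep/adopt/merge rule over per-unit weight tensors, driven by the importance scores of previous tasks versus the new task. The threshold `tau`, the function name `fuse_skill_units`, and the importance-weighted average are hypothetical choices standing in for KIF's actual fine-grained strategy.

```python
def fuse_skill_units(prev_weights, new_weights, prev_imp, new_imp, tau=0.5):
    """Fuse per-unit torch tensors from the model before and after learning
    the new task. The keep/adopt/average rule is an illustrative
    approximation of KIF's fine-grained fusion, not the paper's exact update."""
    fused = {}
    for key, prev_w in prev_weights.items():
        new_w = new_weights[key]
        p_imp, n_imp = prev_imp.get(key, 0.0), new_imp.get(key, 0.0)
        if p_imp >= tau and n_imp < tau:
            # Task-specific to old tasks: retain to prevent forgetting.
            fused[key] = prev_w.clone()
        elif n_imp >= tau and p_imp < tau:
            # Task-specific to the new task: adopt the new knowledge.
            fused[key] = new_w.clone()
        else:
            # Task-shared: importance-weighted merge, enabling
            # bidirectional (forward and backward) transfer.
            w = n_imp / (p_imp + n_imp + 1e-8)
            fused[key] = (1.0 - w) * prev_w + w * new_w
    return fused
```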
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Knowledge Retention
Transfer Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

KIF
Continual Learning
Modularization
Authors

Yujie Feng
Department of Computing, The Hong Kong Polytechnic University, Hong Kong S.A.R., China

Xu Chu

Yongxin Xu
Peking University
Large Language Models · Knowledge Graphs · Electronic Medical Record Analysis

Zexin Lu
Sichuan University

Bo Liu
Department of Computing, The Hong Kong Polytechnic University, Hong Kong S.A.R., China

Philip S. Yu
Professor of Computer Science, University of Illinois at Chicago
Data Mining · Databases · Privacy

Xiao-Ming Wu
Department of Computing, The Hong Kong Polytechnic University, Hong Kong S.A.R., China