Data Efficient Adaptation in Large Language Models via Continuous Low-Rank Fine-Tuning

πŸ“… 2025-09-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address severe catastrophic forgetting and low data efficiency in task adaptation of large language models (LLMs), this paper proposes DEAL, a framework that integrates Low-Rank Adaptation (LoRA) with a continuous fine-tuning strategy. DEAL introduces a knowledge retention module to mitigate forgetting and an adaptive parameter update strategy to improve multi-task stability. Augmented with knowledge distillation, it achieves both model compression and strong generalization in privacy-sensitive settings. Experiments across 15 heterogeneous datasets show that DEAL outperforms state-of-the-art baselines by 2.7%–5.3% in average task accuracy, reduces GPU memory consumption by 38%, and cuts training-data requirements by 42%. The framework thus delivers superior efficiency, robustness, and practicality for continual LLM adaptation.
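The knowledge retention module described above is not specified in detail here; one common way to implement retention in continual fine-tuning is a regularization penalty that keeps the new task's adapter weights close to those learned on earlier tasks. The sketch below illustrates that idea only; the function name, the L2 form of the penalty, and the `lam` coefficient are assumptions for illustration, not the paper's actual module.

```python
import numpy as np

def retention_loss(task_loss, params, old_params, lam=0.1):
    """Task loss plus an L2 penalty that anchors the current adapter
    parameters to those learned on previous tasks. This is one common
    knowledge-retention strategy in continual learning; DEAL's exact
    module may differ."""
    penalty = sum(np.sum((p - q) ** 2) for p, q in zip(params, old_params))
    return task_loss + lam * penalty

# Toy example: a single 2x2 adapter matrix drifting by 0.5 per entry.
old = [np.ones((2, 2))]
new = [np.ones((2, 2)) + 0.5]
loss = retention_loss(1.0, new, old, lam=0.1)
# penalty = 4 * 0.25 = 1.0, so loss = 1.0 + 0.1 * 1.0 = 1.1
assert abs(loss - 1.1) < 1e-9
```

Raising `lam` trades plasticity on the new task for stability on old ones, which is the core tension that catastrophic-forgetting mitigations negotiate.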

πŸ“ Abstract
Recent advancements in Large Language Models (LLMs) have emphasized the critical role of fine-tuning (FT) techniques in adapting LLMs to specific tasks, especially when retraining from scratch is computationally infeasible. Fine-tuning enables LLMs to leverage task- or domain-specific data, producing models that more effectively meet the requirements of targeted applications. However, conventional FT approaches often suffer from catastrophic forgetting and suboptimal data efficiency, limiting their real-world applicability. To address these challenges, this paper proposes DEAL, a novel framework that integrates Low-Rank Adaptation (LoRA) with a continuous fine-tuning strategy. By incorporating knowledge retention and adaptive parameter update modules, the framework mitigates the limitations of existing FT methods while maintaining efficiency in privacy-preserving settings. Experiments on 15 diverse datasets show that DEAL consistently outperforms baseline methods, yielding substantial gains in task accuracy and resource efficiency. These findings demonstrate the potential of our approach to advance continual adaptation in LLMs by enhancing task performance while improving resource efficiency.
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in large language model fine-tuning
Improves data efficiency for adapting models to specific tasks
Overcomes limitations of conventional fine-tuning approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous fine-tuning strategy for LLMs
Low-Rank Adaptation (LoRA) integration
Knowledge retention and adaptive parameter modules
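The LoRA integration listed above follows the standard low-rank reparameterization: the frozen pretrained weight W is augmented by a trainable product B·A of rank r, so only r·(d_in + d_out) parameters are updated per layer. The sketch below shows the generic LoRA forward pass with hypothetical dimensions; it is not the paper's implementation.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=2):
    """Linear layer with a frozen weight W plus a scaled low-rank
    update (alpha / r) * B @ A, as in standard LoRA.

    W: (d_out, d_in) frozen pretrained weight
    A: (r, d_in), B: (d_out, r) -- the only trainable matrices
    """
    scale = alpha / r
    return x @ (W + scale * (B @ A)).T

# Hypothetical dimensions for illustration.
d_in, d_out, r = 8, 6, 2
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))  # B starts at zero, so the update begins as a no-op

x = rng.standard_normal((3, d_in))
out = lora_forward(x, W, A, B, r=r)
# With B = 0 the LoRA branch contributes nothing: output equals x @ W.T
assert np.allclose(out, x @ W.T)
```

Initializing B to zero (with A small and random) is the usual LoRA convention: the adapted model starts exactly at the pretrained solution and drifts only as B is trained.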
πŸ”Ž Similar Papers
No similar papers found.
Xiao Han
Zhejiang University of Technology, Zhejiang Key Laboratory of Visual Information Intelligent Processing, Hangzhou, China
Zimo Zhao
City University of Hong Kong, Hong Kong, China
Wanyu Wang
City University of Hong Kong, Hong Kong, China
Maolin Wang
City University of Hong Kong, Hong Kong, China
Zitao Liu
Jinan University, Guangzhou, China
Artificial Intelligence in Education; Educational Data Mining
Yi Chang
Jilin University, Jilin, China
Xiangyu Zhao
City University of Hong Kong, Hong Kong, China