C-LoRA: Continual Low-Rank Adaptation for Pre-trained Models

📅 2025-02-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address parameter redundancy, high inference overhead, and catastrophic forgetting in continual learning caused by stacking multiple LoRA adapters, this paper proposes Continual Low-Rank Adaptation (C-LoRA). Methodologically, C-LoRA introduces a learnable routing matrix that dynamically dispatches task-specific updates onto a shared low-rank subspace, while imposing orthogonality constraints to mitigate inter-task interference and enhance knowledge reuse. Crucially, C-LoRA unifies LoRA into a single, scalable continual adaptation framework, requiring no additional parameters to accommodate arbitrarily long task sequences. Empirically, it achieves state-of-the-art accuracy and parameter efficiency across multiple continual learning benchmarks. Theoretically, the authors analyze the pivotal role of the routing matrix in balancing knowledge retention and transfer, revealing its function as a structured gate for selective parameter sharing. This work establishes the first parameter-efficient, adapter-based continual learning method that maintains architectural simplicity without sacrificing expressivity or scalability.
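The core mechanism described above (a frozen backbone plus a shared low-rank pair routed per task, with an orthogonality penalty between tasks) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all names (`forward`, `orthogonality_penalty`, the `W x + B R_t A x` factorization, and the cross-task Frobenius penalty) are assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch of a C-LoRA-style routed update (names and exact
# formulation are assumptions, not the paper's actual method or API).
rng = np.random.default_rng(0)

d, r, num_tasks = 8, 2, 3                # feature dim, LoRA rank, task count

W = rng.standard_normal((d, d))          # frozen pre-trained weight (not updated)
A = rng.standard_normal((r, d)) * 0.1    # shared low-rank down-projection
B = rng.standard_normal((d, r)) * 0.1    # shared low-rank up-projection
R = [rng.standard_normal((r, r)) * 0.1   # one learnable routing matrix per task
     for _ in range(num_tasks)]

def forward(x, task_id):
    """Frozen backbone plus a task-routed low-rank update: W x + B R_t A x."""
    return W @ x + B @ (R[task_id] @ (A @ x))

def orthogonality_penalty():
    """Penalize overlap between routing matrices of different tasks
    (squared Frobenius norm of their cross-products), so tasks occupy
    approximately non-interfering directions of the shared subspace."""
    loss = 0.0
    for i in range(num_tasks):
        for j in range(i + 1, num_tasks):
            loss += np.sum((R[i].T @ R[j]) ** 2)
    return loss

x = rng.standard_normal(d)
y0, y1 = forward(x, 0), forward(x, 1)    # same input, two different tasks
print(y0.shape, orthogonality_penalty())
```

Note how the parameter count is independent of the number of tasks except for the small `r × r` routing matrices, which is the scalability argument the summary makes: adapters are not stacked per task, only routed.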

📝 Abstract
Low-Rank Adaptation (LoRA) is an efficient fine-tuning method that has been extensively applied in areas such as natural language processing and computer vision. Existing LoRA fine-tuning approaches excel in static environments but struggle in dynamic learning settings due to their reliance on multiple adapter modules, which increases overhead and complicates inference. We propose Continual Low-Rank Adaptation (C-LoRA), a novel extension of LoRA for continual learning. C-LoRA uses a learnable routing matrix to dynamically manage parameter updates across tasks, ensuring efficient reuse of learned subspaces while enforcing orthogonality to minimize interference and forgetting. Unlike existing approaches that require separate adapters for each task, C-LoRA enables an integrated approach to task adaptation, achieving both scalability and parameter efficiency in sequential learning scenarios. C-LoRA achieves state-of-the-art accuracy and parameter efficiency on benchmarks while providing theoretical insights into its routing matrix's role in retaining and transferring knowledge, establishing a scalable framework for continual learning.
Problem

Research questions and friction points this paper is trying to address.

Dynamic parameter management in LoRA
Minimizing interference in continual learning
Scalable task adaptation in sequential scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic parameter routing
Integrated task adaptation
Orthogonal subspace reuse
Xin Zhang
Institute of Intelligent Information Processing, Shanxi University, Taiyuan, 030006, China
Liang Bai
Institute of Intelligent Information Processing, Shanxi University, Taiyuan, 030006, China
Xian Yang
University of Manchester
Artificial Intelligence · Machine Learning · Healthcare AI · Natural Language Processing
Jiye Liang
Shanxi University