🤖 AI Summary
Continual learning for class-incremental semantic segmentation typically requires frequent full-model retraining, incurring prohibitive computational and memory overhead, especially in resource-constrained settings.
Method: This work introduces Low-Rank Adaptation (LoRA) to continual semantic segmentation for the first time, proposing a parameter-efficient framework that freezes the backbone network and optimizes only a small set of low-rank incremental parameters to enable cross-task knowledge transfer.
Contribution/Results: Evaluated comprehensively with NetScore, a unified metric that balances accuracy against resource efficiency, the method achieves state-of-the-art or competitive performance on multiple benchmarks while reducing trainable parameters by over 90%. It also significantly lowers GPU memory consumption and FLOPs, demonstrating both practical viability and technical advancement for deployment on hardware-limited platforms.
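The core idea summarized above, freezing the backbone weights and training only a low-rank additive update, can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation; the class name, rank, and scaling convention are assumptions based on the standard LoRA formulation (W + (alpha/r) * B A).

```python
import numpy as np

class LoRALinear:
    """Hypothetical sketch of a LoRA-adapted linear layer.

    The dense weight W (the "backbone") is frozen; only the low-rank
    factors A and B are trained, and the same small parameter set is
    reused across incremental tasks.
    """

    def __init__(self, d_in, d_out, rank=4, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))        # frozen
        self.A = 0.01 * rng.standard_normal((rank, d_in))  # trainable
        self.B = np.zeros((d_out, rank))                   # trainable, zero-init
        self.scale = alpha / rank

    def forward(self, x):
        # y = x W^T + (alpha/r) * x A^T B^T; with B = 0 at init,
        # the adapted layer starts out identical to the frozen one.
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)

    def trainable_params(self):
        # Only A and B are updated: r*(d_in + d_out) parameters
        # instead of d_in*d_out for full fine-tuning.
        return self.A.size + self.B.size
```

For a 64-to-32 layer at rank 4 this trains 384 parameters instead of 2048, which is the source of the large reduction in trainable parameters the summary reports at the full-model scale.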
📝 Abstract
In the past, continual learning (CL) was mostly concerned with the problem of catastrophic forgetting in neural networks, which arises when incrementally learning a sequence of tasks. Current CL methods function within the confines of limited data access, without any restrictions imposed on computational resources. However, in real-world scenarios, the latter takes precedence, as deployed systems are often computationally constrained. A major drawback of most CL methods is the need to retrain the entire model for each new task. The computational demands of retraining large models can be prohibitive, limiting the applicability of CL in environments with limited resources. Through CLoRA, we explore the applicability of Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method, for class-incremental semantic segmentation. CLoRA leverages a small set of the model's parameters and uses the same set for learning across all tasks. Results demonstrate the efficacy of CLoRA, achieving performance on par with, and in some cases exceeding, the baseline methods. We further evaluate CLoRA using NetScore, underscoring the need to factor in resource efficiency and to evaluate CL methods beyond task performance. CLoRA significantly reduces the hardware requirements for training, making it well-suited for CL in resource-constrained environments after deployment.
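NetScore, used above to weigh task performance against resource cost, can be computed as a one-line formula. The sketch below follows the commonly cited definition Ω = 20·log10(a^α / (p^β · m^γ)), with accuracy a, parameter count p, and compute m; the default exponents and the units (accuracy in percent, parameters in millions, MACs in billions) are assumptions from the original NetScore formulation, not values taken from this paper.

```python
import math

def netscore(accuracy, params_m, macs_g, alpha=2.0, beta=0.5, gamma=0.5):
    """Hypothetical NetScore sketch: Omega = 20 * log10(a^alpha / (p^beta * m^gamma)).

    accuracy : task performance (e.g. mIoU, in percent)
    params_m : trainable parameters, in millions
    macs_g   : multiply-accumulate operations, in billions
    Higher is better; the exponents trade accuracy off against cost.
    """
    return 20.0 * math.log10(accuracy ** alpha / (params_m ** beta * macs_g ** gamma))
```

Because parameters and compute sit in the denominator, a method that matches a baseline's accuracy with a fraction of the trainable parameters, as CLoRA aims to, scores strictly higher.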