CoLoR-GAN: Continual Few-Shot Learning with Low-Rank Adaptation in Generative Adversarial Networks

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting and parameter explosion in few-shot continual learning (FS-CL) for generative adversarial networks (GANs), this paper proposes CoLoR-GAN. The method introduces LoRA-in-LoRA (LLoRA), a nested low-rank adapter architecture applied to convolutional layers, which drastically reduces adapter size via hierarchical low-rank decomposition. The paper also conducts a systematic empirical analysis of LoRA hyperparameters to improve scalability and adaptability. Crucially, CoLoR-GAN achieves efficient parameter updates and cross-task knowledge retention without introducing task-specific weights. Evaluated on multiple FS-CL benchmarks, it attains state-of-the-art performance while using approximately 62% fewer parameters than LFS-GAN. This demonstrates superior parameter efficiency, training stability, and generalization capability, establishing a new trade-off frontier for lightweight continual generative modeling.
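The summary above hinges on how cheap a low-rank conv adapter is compared with the full kernel. As a minimal sketch (not the paper's implementation; the function name, ranks, and channel sizes below are illustrative assumptions), a conv kernel of shape `(out_c, in_c, k, k)` can be flattened to a matrix and updated with a rank-`r` product `B @ A` in the usual LoRA style:

```python
import numpy as np

def lora_conv_delta(out_c, in_c, k, rank, rng):
    """Rank-`rank` LoRA update for a conv kernel, obtained by
    flattening the kernel to an (out_c, in_c*k*k) matrix."""
    fan_in = in_c * k * k
    A = rng.standard_normal((rank, fan_in)) * 0.01  # down-projection
    B = np.zeros((out_c, rank))                     # up-projection, zero-init so the update starts at 0
    delta = (B @ A).reshape(out_c, in_c, k, k)      # low-rank kernel update
    n_params = rank * (out_c + fan_in)              # trainable adapter parameters
    return delta, n_params

rng = np.random.default_rng(0)
out_c, in_c, k, r = 256, 256, 3, 4                  # illustrative layer sizes
delta, lora_params = lora_conv_delta(out_c, in_c, k, r, rng)
full_params = out_c * in_c * k * k                  # parameters of the full kernel
print(delta.shape, lora_params, full_params)
```

With these (assumed) sizes the adapter trains 10,240 parameters against 589,824 for the full kernel, which is the kind of gap the summary's parameter-efficiency claim refers to.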

📝 Abstract
Continual learning (CL) in the context of Generative Adversarial Networks (GANs) remains a challenging problem, particularly when learning from few-shot (FS) samples without catastrophic forgetting. The most effective current state-of-the-art (SOTA) methods, like LFS-GAN, introduce a non-negligible quantity of new weights at each training iteration, which becomes significant over the long term. For this reason, this paper introduces continual few-shot learning with low-rank adaptation in GANs, named CoLoR-GAN, a framework designed to handle both FS and CL together, leveraging low-rank tensors to efficiently adapt the model to target tasks while further reducing the number of parameters required. Applying a vanilla LoRA implementation already yields strong results. To shrink the adapters even further, we push LoRA's limits by introducing a LoRA-in-LoRA (LLoRA) technique for convolutional layers. Finally, aware of how critical the choice of LoRA hyperparameters is, we provide an empirical study to easily find the best ones. We demonstrate the effectiveness of CoLoR-GAN through experiments on several benchmark CL and FS tasks and show that our model is efficient, reaching SOTA performance with drastically fewer resources. Source code is available on GitHub: https://github.com/munsifali11/CoLoR-GAN
Problem

Research questions and friction points this paper is trying to address.

Addresses continual learning in GANs with few-shot samples
Reduces catastrophic forgetting and parameter growth in adaptation
Optimizes low-rank tensor adapters for efficient task transitions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses low-rank tensors for efficient model adaptation
Introduces LoRA in LoRA technique for convolutional layers
Provides empirical study for optimal hyperparameter selection
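The LoRA-in-LoRA idea above can be made concrete with a parameter count. The page does not spell out LLoRA's exact factorization, so the sketch below is one plausible reading, not the paper's formulation: the wide LoRA factor `A` (shape `r1 × fan_in`) is itself replaced by a rank-`r2` product, and we compare trainable-parameter counts for illustrative sizes.

```python
def llora_params(out_c, fan_in, r1, r2):
    """Trainable parameters if the wide LoRA factor A (r1 x fan_in)
    is itself factored into A1 (r1 x r2) @ A2 (r2 x fan_in).
    B stays (out_c x r1). This nesting is an assumed reading of LLoRA."""
    return out_c * r1 + r1 * r2 + r2 * fan_in

out_c, fan_in = 256, 256 * 3 * 3      # conv kernel flattened, illustrative sizes
r1, r2 = 8, 4                         # outer and inner ranks (assumed)
plain = r1 * (out_c + fan_in)         # vanilla LoRA at rank r1
nested = llora_params(out_c, fan_in, r1, r2)
print(plain, nested)
```

Under these assumptions the nested variant roughly halves the adapter (11,296 vs 20,480 parameters), which matches the spirit of the "drastically reduces adapter size" claim even if the paper's exact decomposition differs.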