GradMix: Gradient-based Selective Mixup for Robust Data Augmentation in Class-Incremental Learning

📅 2025-05-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In class-incremental continual learning, catastrophic forgetting severely impairs model retention of previously learned task knowledge. To address this, we propose Gradient-Driven Selective Mixup—a category-aware data augmentation method integrated within an experience replay framework. Our approach introduces a novel gradient-sensitive category-pair selection mechanism that dynamically assesses inter-class gradient similarity and confusion to determine Mixup feasibility; only category pairs conducive to beneficial knowledge transfer are mixed, thereby avoiding harmful interpolations. This shifts data augmentation from blind randomness to knowledge-guided synthesis. Evaluated on multiple standard benchmarks, our method achieves average accuracy improvements of 2.1–4.7% over mainstream augmentation baselines, while reducing forgetting on old tasks by up to 38%, significantly enhancing the stability of retained knowledge.

Technology Category

Application Category

📝 Abstract
In the context of continual learning, acquiring new knowledge while maintaining previous knowledge presents a significant challenge. Existing methods often use experience replay techniques that store a small portion of previous task data for training. In experience replay approaches, data augmentation has emerged as a promising strategy to further improve the model performance by mixing limited previous task data with sufficient current task data. However, we theoretically and empirically analyze that training with mixed samples from random sample pairs may harm the knowledge of previous tasks and cause greater catastrophic forgetting. We then propose GradMix, a robust data augmentation method specifically designed for mitigating catastrophic forgetting in class-incremental learning. GradMix performs gradient-based selective mixup using a class-based criterion that mixes only samples from helpful class pairs and not from detrimental class pairs for reducing catastrophic forgetting. Our experiments on various real datasets show that GradMix outperforms data augmentation baselines in accuracy by minimizing the forgetting of previous knowledge.
Problem

Research questions and friction points this paper is trying to address.

Mitigates catastrophic forgetting in class-incremental learning
Improves model performance with selective gradient-based mixup
Reduces harmful effects of random sample pair mixing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-based selective mixup for robust augmentation
Class-based criterion for beneficial sample mixing
Reduces catastrophic forgetting in incremental learning
🔎 Similar Papers
No similar papers found.