Few-Shot Continual Learning for 3D Brain MRI with Frozen Foundation Models

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a continual learning framework for 3D medical imaging under the challenging setting of limited labeled data, sequential task arrival, and no replay of historical data. By leveraging a frozen 3D medical foundation model, the approach attaches a task-specific low-rank adaptation (LoRA) module and a dedicated head to each task, enabling multi-task adaptation without updating any backbone parameters. This design eliminates catastrophic forgetting by construction, achieving backward transfer (BWT) of zero while training fewer than 0.1% of the model's parameters, and yields the best balanced performance across tasks compared with sequential full fine-tuning and linear probing. The method reaches Dice = 0.62 ± 0.07 on BraTS brain tumor segmentation and MAE = 0.16 ± 0.05 on IXI brain age estimation, offering an efficient solution for few-shot, replay-free continual learning in 3D medical image analysis.
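The core mechanism described above (a frozen weight matrix plus a trainable low-rank update, with one adapter per task) can be sketched for a single linear layer. This is an illustrative assumption of how such a layer typically looks, not the authors' code; note the paper's <0.1% trainable-parameter figure refers to the whole model, so the per-layer fraction here is larger.

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update (alpha/r) * B @ A.

    Sketch of the frozen-backbone + per-task LoRA idea; class name,
    shapes, and init scheme are assumptions for illustration.
    """
    def __init__(self, d_in, d_out, r=4, alpha=8):
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)  # frozen
        self.A = rng.standard_normal((r, d_in)) * 0.01  # trainable
        self.B = np.zeros((d_out, r))                   # trainable, zero-init
        self.scale = alpha / r

    def forward(self, x):
        # With B = 0 at init, the adapted layer reproduces the frozen layer
        # exactly, so adaptation starts from the pretrained behavior.
        return x @ (self.W + self.scale * self.B @ self.A).T

    def trainable_fraction(self):
        lora = self.A.size + self.B.size
        return lora / (self.W.size + lora)

layer = LoRALinear(d_in=1024, d_out=1024, r=4)
print(f"trainable fraction for this layer: {layer.trainable_fraction():.2%}")
```

Because each task gets its own (A, B) pair while W is shared and frozen, training task 2 cannot perturb the function used for task 1.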

📝 Abstract
Foundation models pretrained on large-scale 3D medical imaging data face challenges when adapted to multiple downstream tasks under continual learning with limited labeled data. We address few-shot continual learning for 3D brain MRI by combining a frozen pretrained backbone with task-specific Low-Rank Adaptation (LoRA) modules. Tasks arrive sequentially, tumor segmentation (BraTS) followed by brain age estimation (IXI), with no replay of previous task data. Each task receives a dedicated LoRA adapter; only the adapter and task-specific head are trained while the backbone remains frozen, thereby eliminating catastrophic forgetting by design (BWT = 0). In continual learning, sequential full fine-tuning suffers severe forgetting (T1 Dice drops from 0.80 to 0.16 after T2), while sequential linear probing achieves strong T1 performance (Dice 0.79) but fails on T2 (MAE 1.45). Our LoRA approach achieves the best balanced performance across both tasks: T1 Dice 0.62 ± 0.07, T2 MAE 0.16 ± 0.05, with zero forgetting and <0.1% trainable parameters per task, though with a noted systematic age underestimation on T2 (Wilcoxon p < 0.001). Frozen foundation models with task-specific LoRA adapters thus offer a practical solution when both tasks must be maintained under few-shot continual learning.
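The BWT = 0 claim follows directly from the standard backward-transfer definition used in the continual learning literature: with R[t][i] denoting performance on task i after training through task t, BWT averages R[T-1][i] − R[i][i] over earlier tasks, and a frozen backbone with one adapter per task means earlier-task scores cannot change. A minimal sketch; the second-task entries in the matrices below are illustrative placeholders, not numbers from the paper.

```python
def backward_transfer(R):
    """BWT = mean over i < T-1+1 earlier tasks of R[T-1][i] - R[i][i].

    R[t][i]: performance on task i after training through task t
    (higher is better, e.g. Dice). Standard definition; negative
    values indicate forgetting.
    """
    T = len(R)
    return sum(R[T - 1][i] - R[i][i] for i in range(T - 1)) / (T - 1)

# Sequential full fine-tuning: T1 Dice falls from 0.80 to 0.16 after T2
# (figures from the abstract; the T2 entry 0.84 is a placeholder).
R_full = [[0.80, None],
          [0.16, 0.84]]
# Per-task LoRA on a frozen backbone: the T1 score is untouched by T2 training.
R_lora = [[0.62, None],
          [0.62, 0.84]]

print(backward_transfer(R_full))  # negative: severe forgetting
print(backward_transfer(R_lora))  # → 0.0
```

With only two tasks the average reduces to a single term, R[1][0] − R[0][0], which is why freezing everything touched by task 1 pins BWT to exactly zero.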
Problem

Research questions and friction points this paper is trying to address.

Few-Shot Continual Learning
3D Brain MRI
Catastrophic Forgetting
Foundation Models
Task Adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Few-Shot Continual Learning
Frozen Foundation Models
Low-Rank Adaptation (LoRA)
3D Brain MRI
Catastrophic Forgetting
Chi-Sheng Chen
Independent Researcher, USA
Xinyu Zhang
Indiana University Bloomington
Guan-Ying Chen
Independent Researcher, Taiwan
Qiuzhe Xie
National Taiwan University, Taiwan
Fan Zhang
Boise State University, United States
En-Jui Kuo
National Yang Ming Chiao Tung University, Taiwan