On the Convergence of Continual Federated Learning Using Incrementally Aggregated Gradients

📅 2024-11-12
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address severe global catastrophic forgetting and the trade-off between privacy preservation and scalability in continual federated learning (CFL), this paper proposes C-FLAG: a framework featuring lightweight replay buffers at edge devices, incremental gradient aggregation combining memory-based and current-task gradients, and an adaptive learning rate mechanism that jointly mitigates forgetting and bias. The authors establish a convergence theory for CFL, proving an $O(1/\sqrt{T})$ convergence rate. Experiments under both task-incremental and class-incremental settings show that C-FLAG significantly outperforms state-of-the-art methods, achieving up to 8.2% higher average accuracy and reducing average forgetting by 37%. Moreover, C-FLAG offers strong privacy protection, high communication efficiency, and improved model robustness.

๐Ÿ“ Abstract
The holy grail of machine learning is to enable Continual Federated Learning (CFL) to enhance the efficiency, privacy, and scalability of AI systems while learning from streaming data. The primary challenge of a CFL system is to overcome global catastrophic forgetting, wherein the accuracy of the global model trained on new tasks declines on the old tasks. In this work, we propose Continual Federated Learning with Aggregated Gradients (C-FLAG), a novel replay-memory based federated strategy consisting of edge-based gradient updates on memory and aggregated gradients on the current data. We provide convergence analysis of the C-FLAG approach which addresses forgetting and bias while converging at a rate of $O(1/\sqrt{T})$ over $T$ communication rounds. We formulate an optimization sub-problem that minimizes catastrophic forgetting, translating CFL into an iterative algorithm with adaptive learning rates that ensure seamless learning across tasks. We empirically show that C-FLAG outperforms several state-of-the-art baselines on both task and class-incremental settings with respect to metrics such as accuracy and forgetting.
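The update described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: each client combines a gradient computed on its replay buffer (memory) with a gradient on the current-task data, and the server averages these aggregated gradients and applies a decaying step size consistent with the stated $O(1/\sqrt{T})$ rate. The least-squares loss, equal weighting of the two gradients, and the `alpha0 / sqrt(t + 1)` schedule are all illustrative assumptions.

```python
import numpy as np

def lstsq_grad(w, batch):
    """Gradient of the mean squared error (1/n)*||Xw - y||^2 / 2 on one batch."""
    X, y = batch
    return X.T @ (X @ w - y) / len(y)

def client_gradient(w, replay_batch, current_batch):
    # Incremental aggregation: memory-based gradient plus current-task
    # gradient. Equal weighting is an assumption for illustration.
    return lstsq_grad(w, replay_batch) + lstsq_grad(w, current_batch)

def server_round(w, client_batches, t, alpha0=0.2):
    # Decaying learning rate, a stand-in for the paper's adaptive schedule.
    eta = alpha0 / np.sqrt(t + 1)
    # Federated averaging of the clients' aggregated gradients.
    g = np.mean([client_gradient(w, rb, cb) for rb, cb in client_batches],
                axis=0)
    return w - eta * g
```

Running a few such rounds on synthetic data drives the shared model toward a solution that fits both the replay and the current-task batches, which is the behavior the convergence analysis formalizes.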
Problem

Research questions and friction points this paper is trying to address.

Continual Federated Learning
Knowledge Retention
Privacy Preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

C-FLAG
Continual Federated Learning
Catastrophic Forgetting Mitigation