GC-Fed: Gradient Centralized Federated Learning with Partial Client Participation

πŸ“… 2025-03-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
In federated learning under high data heterogeneity, severe client drift occurs, and conventional reference-model-based gradient correction methods are unstable under partial client participation. Method: the paper proposes a reference-model-free hierarchical gradient centralization mechanism that applies coordinated gradient correction to the feature-extraction layers during local training and to the classifier layer during global aggregation, establishing a local-global two-stage hybrid optimization paradigm. Contribution/Results: the authors provide theoretical convergence guarantees showing strict improvement over standard FedAvg, and empirical evaluation under strong heterogeneity demonstrates up to a 20% absolute improvement in test accuracy over baseline methods, achieving state-of-the-art performance.

πŸ“ Abstract
Multi-source information fusion (MSIF) leverages diverse data streams to enhance decision-making, situational awareness, and system resilience. Federated Learning (FL) enables MSIF while preserving privacy but suffers from client drift under high data heterogeneity, leading to performance degradation. Traditional mitigation strategies rely on reference-based gradient adjustments, which can be unstable in partial participation settings. To address this, we propose Gradient Centralized Federated Learning (GC-Fed), a reference-free gradient correction method inspired by Gradient Centralization (GC). We introduce Local GC and Global GC, applying GC during local training and global aggregation, respectively. Our hybrid GC-Fed approach selectively applies GC at the feature extraction layer locally and at the classifier layer globally, improving training stability and model performance. Theoretical analysis and empirical results demonstrate that GC-Fed mitigates client drift and achieves state-of-the-art accuracy gains of up to 20% in heterogeneous settings.
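The Gradient Centralization operation the abstract builds on (Yong et al., 2020) is a one-line transform: for each output unit of a weight tensor, subtract the mean of that unit's gradient entries, so every row of the centralized gradient sums to zero. Below is a minimal NumPy sketch assuming the standard GC definition from that paper; the function name and the toy gradient are illustrative, not taken from the GC-Fed code.

```python
import numpy as np

def centralize_gradient(grad):
    """Gradient Centralization (Yong et al., 2020): subtract, per output
    unit (leading dimension), the mean of its gradient entries. Applied
    only to weight tensors with ndim >= 2 (dense / conv kernels);
    biases and other 1-D parameters are left untouched."""
    if grad.ndim < 2:
        return grad
    axes = tuple(range(1, grad.ndim))  # all dims except the output dim
    return grad - grad.mean(axis=axes, keepdims=True)

# Toy gradient of an 8x16 dense layer (illustrative data).
rng = np.random.default_rng(0)
g = rng.normal(size=(8, 16))
g_c = centralize_gradient(g)
print(np.allclose(g_c.mean(axis=1), 0.0))  # each row now has zero mean
```

The zero-mean property is what the paper exploits: it removes a shared offset from every client's update without needing a reference model to compute the correction against.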
Problem

Research questions and friction points this paper is trying to address.

Client drift under high data heterogeneity degrades federated model performance.
Reference-based gradient corrections become unstable under partial client participation.
A stable, reference-free correction is needed; the proposed fix recovers up to 20% accuracy in heterogeneous settings.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient Centralized Federated Learning (GC-Fed)
Local and Global Gradient Centralization (GC)
Hybrid GC-Fed for improved stability and accuracy
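The hybrid rule the bullets describe splits the correction by layer: GC is applied to the feature extractor during local training, and to the classifier during server-side aggregation. A hypothetical sketch of the server half (Global GC) under that reading: FedAvg the per-layer updates, then centralize only the classifier update. The key names and helper are assumptions for illustration, not the paper's API.

```python
import numpy as np

def centralize(g):
    """GC operator: zero-mean each output unit's gradient (2-D+ only)."""
    if g.ndim < 2:
        return g
    return g - g.mean(axis=tuple(range(1, g.ndim)), keepdims=True)

def gc_fed_aggregate(client_updates, classifier_keys):
    """Hypothetical Global GC step: average the clients' layer updates
    (plain FedAvg), then centralize only the classifier-layer update.
    Feature-extractor updates pass through unchanged here because, per
    the abstract, they receive GC during local training instead."""
    keys = client_updates[0].keys()
    avg = {k: np.mean([u[k] for u in client_updates], axis=0)
           for k in keys}
    return {k: centralize(v) if k in classifier_keys else v
            for k, v in avg.items()}

# Two toy clients, each with a feature layer and a classifier layer.
rng = np.random.default_rng(1)
updates = [{"feature.w": rng.normal(size=(4, 8)),
            "classifier.w": rng.normal(size=(3, 4))} for _ in range(2)]
agg = gc_fed_aggregate(updates, classifier_keys={"classifier.w"})
```

Because the correction is a fixed projection rather than a difference against a stored reference model, it is unaffected by which subset of clients participates in a given round, which is the stability argument the summary makes.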
Jungwon Seo
Department of Electrical Engineering and Computer Science, University of Stavanger, Stavanger, 4021, Norway
F. O. Catak
Department of Electrical Engineering and Computer Science, University of Stavanger, Stavanger, 4021, Norway
Chunming Rong
University of Stavanger
Security & Blockchain · AI · Cloud Computing
Kibeom Hong
Department of Software, Sookmyung Women’s University, Seoul, 04310, South Korea
Minhoe Kim
Department of Electrical and Information Engineering, Seoul National University of Science and Technology, Seoul, 01811, South Korea