FedQS: Optimizing Gradient and Model Aggregation for Semi-Asynchronous Federated Learning

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In semi-asynchronous federated learning (SAFL), a fundamental trade-off exists between gradient aggregation (e.g., FedSGD) and model aggregation (e.g., FedAvg) regarding accuracy, convergence speed, and stability: the former achieves high accuracy and rapid convergence but suffers from instability, whereas the latter ensures robustness at the cost of slower convergence and lower accuracy. This work presents the first theoretical analysis of this performance dichotomy in SAFL and proposes a unified adaptive optimization framework. Our key contributions are: (1) a client categorization scheme into four types based on data distribution heterogeneity and computational capability; (2) a dual-path aggregation mechanism supporting both gradient- and model-level updates; and (3) a dynamic training control strategy for runtime adaptation. Evaluated across computer vision, natural language processing, and real-world tasks, our method achieves state-of-the-art accuracy, minimal loss, and among the fastest convergence rates—significantly outperforming existing SAFL approaches.
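The gradient- versus model-aggregation dichotomy described above can be sketched in a few lines. This is an illustrative contrast of FedSGD-style and FedAvg-style server updates using plain NumPy averaging; the function names and learning rate are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

def fedsgd_step(global_model, client_grads, lr=0.1):
    """Gradient aggregation (FedSGD-style): the server averages the
    clients' raw gradients and applies one optimizer step."""
    avg_grad = np.mean(client_grads, axis=0)
    return global_model - lr * avg_grad

def fedavg_step(client_models, client_sizes):
    """Model aggregation (FedAvg-style): the server averages the
    clients' locally trained weights, weighted by local dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return np.average(client_models, axis=0, weights=weights)
```

In a semi-asynchronous setting, the gradient path reacts to every arriving update (fast but noisy), while the model path smooths over full local training runs (stable but slower), which is the trade-off FedQS's dual-path mechanism is designed to balance.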

📝 Abstract
Federated learning (FL) enables collaborative model training across multiple parties without sharing raw data, with semi-asynchronous FL (SAFL) emerging as a balanced approach between synchronous and asynchronous FL. However, SAFL faces significant challenges in optimizing both gradient-based (e.g., FedSGD) and model-based (e.g., FedAvg) aggregation strategies, which exhibit distinct trade-offs in accuracy, convergence speed, and stability. While gradient aggregation achieves faster convergence and higher accuracy, it suffers from pronounced fluctuations, whereas model aggregation offers greater stability but slower convergence and suboptimal accuracy. This paper presents FedQS, the first framework to theoretically analyze and address these disparities in SAFL. FedQS introduces a divide-and-conquer strategy to handle client heterogeneity by classifying clients into four distinct types and adaptively optimizing their local training based on data distribution characteristics and available computational resources. Extensive experiments on computer vision, natural language processing, and real-world tasks demonstrate that FedQS achieves the highest accuracy, attains the lowest loss, and ranks among the fastest in convergence speed, outperforming state-of-the-art baselines. Our work bridges the gap between aggregation strategies in SAFL, offering a unified solution for stable, accurate, and efficient federated learning. The code and datasets are available at https://anonymous.4open.science/r/FedQS-EDD6.
Problem

Research questions and friction points this paper is trying to address.

Optimizing gradient and model aggregation in semi-asynchronous federated learning
Addressing trade-offs between accuracy, convergence speed, and stability
Handling client heterogeneity through adaptive local training optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Divides clients into four types adaptively
Optimizes local training based on data distribution
Unifies gradient and model aggregation strategies
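The four-way client categorization can be pictured as a 2x2 split on data-distribution heterogeneity and computational capability. The scoring function below is a hypothetical sketch; the paper's actual classification criteria and thresholds may differ.

```python
def categorize_client(non_iid_score, speed_score,
                      het_thresh=0.5, speed_thresh=0.5):
    """Assign a client to one of four types from two normalized scores:
    non_iid_score (0 = IID data, 1 = highly skewed) and
    speed_score (0 = slow hardware, 1 = fast hardware).
    Thresholds are illustrative placeholders."""
    fast = speed_score > speed_thresh
    skewed = non_iid_score > het_thresh
    if fast and not skewed:
        return "fast-IID"
    if fast and skewed:
        return "fast-non-IID"
    if not fast and not skewed:
        return "slow-IID"
    return "slow-non-IID"
```

Under such a split, each type could be routed to the aggregation path that suits it, e.g. letting fast, IID clients feed the responsive gradient path while slow or skewed clients contribute through the more robust model path.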
Yunbo Li
School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
Jiaping Gui
Assistant Professor, Shanghai Jiao Tong University
Network and System Security · Artificial Intelligence · Software Engineering
Zhihang Deng
School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
Fanchao Meng
School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
Yue Wu
School of Computer Science, Shanghai Jiao Tong University, Shanghai, China