Communication-Efficient Federated Learning by Quantized Variance Reduction for Heterogeneous Wireless Edge Networks

📅 2025-01-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address high communication overhead, slow convergence, and sensitivity to device heterogeneity and channel instability in federated learning (FL) over heterogeneous wireless edge networks, this paper proposes FedQVR—a novel FL framework featuring the first variance-reduction mechanism under quantization constraints. FedQVR supports heterogeneous local updates and ultra-low-bit (1–2 bit) model transmission, while theoretically guaranteeing an $O(1/T)$ convergence rate. Furthermore, we design FedQVR-E, a resource-aware joint bandwidth and quantization-bit allocation algorithm tailored to non-ideal channel conditions. Extensive experiments demonstrate that FedQVR reduces communication volume by 3.2× compared to state-of-the-art efficient FL methods, improves model accuracy by 2.7%, and significantly enhances energy efficiency and robustness against device and channel fluctuations.
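The paper's exact quantizer and variance-reduction scheme are not reproduced here, but the core ingredient of ultra-low-bit (1–2 bit) model transmission can be illustrated with a generic *unbiased stochastic uniform quantizer*, a standard building block in quantized FL. This is a sketch under that assumption, not FedQVR's actual algorithm; the function name and level grid are illustrative.

```python
import numpy as np

def stochastic_quantize(x, bits=2, rng=None):
    """Generic unbiased stochastic uniform quantizer (illustrative sketch,
    not FedQVR's exact scheme): maps each entry of x onto a grid of
    2**bits levels spanning [x.min(), x.max()], rounding up or down at
    random with probabilities chosen so that E[Q(x)] = x."""
    rng = np.random.default_rng(rng)
    lo, hi = x.min(), x.max()
    if hi == lo:                      # constant vector: nothing to quantize
        return x.copy()
    levels = 2 ** bits - 1
    scaled = (x - lo) / (hi - lo) * levels   # position on the quantization grid
    floor = np.floor(scaled)
    prob_up = scaled - floor                 # round up with this probability
    q = floor + (rng.random(x.shape) < prob_up)
    return lo + q / levels * (hi - lo)

# Unbiasedness check: averaging many independent quantizations recovers x,
# so quantization noise averages out across rounds/devices.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
est = np.mean([stochastic_quantize(x, bits=2, rng=i) for i in range(20000)],
              axis=0)
print(np.max(np.abs(est - x)))  # small residual error
```

Because the quantizer is unbiased, its noise enters convergence bounds only through its variance, which is the property a variance-reduced update can exploit; with 2 bits each coordinate takes one of only four grid values, so a model update costs roughly 2 bits per entry plus two scalars (`lo`, `hi`).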

📝 Abstract
Federated learning (FL) has been recognized as a viable solution for local-privacy-aware collaborative model training in wireless edge networks, but its practical deployment is hindered by the high communication overhead caused by frequent and costly server-device synchronization. Notably, most existing communication-efficient FL algorithms fail to reduce the significant inter-device variance resulting from the prevalent issue of device heterogeneity. This variance severely decelerates algorithm convergence, increasing communication overhead and making it more challenging to obtain a well-performing model. In this paper, we propose a novel communication-efficient FL algorithm, named FedQVR, which relies on a sophisticated variance-reduction scheme to achieve heterogeneity-robustness in the presence of quantized transmission and heterogeneous local updates among active edge devices. Comprehensive theoretical analysis shows that FedQVR is inherently resilient to device heterogeneity and retains a comparable convergence rate even with a small number of quantization bits, yielding significant communication savings. Moreover, considering non-ideal wireless channels, we propose FedQVR-E, which enhances the convergence of FedQVR by jointly allocating bandwidth and quantization bits across devices under constrained transmission delays. Extensive experimental results demonstrate the superior performance of the proposed algorithms over their counterparts in terms of both communication efficiency and application performance.
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Communication Efficiency
Device Performance Variability
Innovation

Methods, ideas, or system contributions that make the work stand out.

FedQVR
Energy Efficiency
Wireless Network Optimization
Shuai Wang
National Key Laboratory of Wireless Communications, University of Electronic Science and Technology of China, Chengdu, 611731, China
Yanqing Xu
Shenzhen Research Institute of Big Data and School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen 518172, China
Chaoqun You
Fudan University
ML/AI · wireless networking · 5G/6G · O-RAN · NTN
Mingjie Shao
Academy of Mathematics and Systems Science, Chinese Academy of Sciences
signal processing · wireless communication · optimization · machine learning
Tony Q. S. Quek
Information Systems Technology and Design, Singapore University of Technology and Design, Singapore 487372