Enhancing Communication Efficiency in FL with Adaptive Gradient Quantization and Communication Frequency Optimization

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high communication overhead caused by frequent model updates in resource-constrained wireless federated learning, this paper proposes a three-part cooperative optimization framework: (1) adaptive feature pruning to reduce local computation and upload dimensionality; (2) a dynamic quantization mechanism guided by gradient novelty and error sensitivity, enabling high-fidelity gradient compression; and (3) joint optimization of communication frequency to balance convergence speed against transmission cost. Unlike conventional static quantization and fixed communication intervals, the approach adaptively coordinates computation, compression, and scheduling. Experimental results show that, while preserving model accuracy, the proposed method reduces communication overhead by up to 42.6%, cuts the number of rounds needed to converge by 31.8%, and significantly outperforms mainstream baseline methods in training efficiency.
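The paper does not include source code; the NumPy sketch below shows one way a novelty- and error-sensitivity-driven quantizer of this kind could be realized. The bit-width mapping, the `novelty_ref` threshold, and the stochastic uniform quantizer are illustrative assumptions, not the authors' exact mechanism.

```python
import numpy as np

def quantize_stochastic(g, bits):
    """Uniform quantization with stochastic rounding (unbiased in expectation)."""
    levels = 2 ** bits - 1
    g_min, g_max = g.min(), g.max()
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    x = (g - g_min) / scale
    lower = np.floor(x)
    # Round up with probability equal to the fractional part.
    q = lower + (np.random.rand(*g.shape) < (x - lower))
    return g_min + q * scale

def adaptive_quantize(grad, prev_grad, residual,
                      bits_lo=2, bits_hi=8, novelty_ref=0.5):
    """Pick a bit-width from gradient novelty and error sensitivity,
    then quantize with error feedback (residual carried to the next round)."""
    eps = 1e-12
    # Novelty: relative change of the gradient versus the previous round.
    novelty = np.linalg.norm(grad - prev_grad) / (np.linalg.norm(prev_grad) + eps)
    # Error sensitivity: how much unsent quantization residual has accumulated.
    sensitivity = np.linalg.norm(residual) / (np.linalg.norm(grad) + eps)
    # More novel / more error-sensitive gradients get more bits (higher fidelity).
    frac = min(1.0, novelty / novelty_ref + sensitivity)
    bits = int(round(bits_lo + frac * (bits_hi - bits_lo)))
    target = grad + residual               # fold the residual back in
    q = quantize_stochastic(target, bits)
    return q, target - q, bits             # quantized update, new residual, bits used
```

Error feedback (carrying `target - q` into the next round) is what keeps low-bit rounds from silently discarding gradient mass.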

📝 Abstract
Federated Learning (FL) enables participant devices to collaboratively train deep learning models without sharing their data with the server or other devices, effectively addressing data privacy and computational concerns. However, FL faces a major bottleneck: the high communication overhead of frequent model updates between devices and the server limits its deployment in resource-constrained wireless networks. In this paper, we propose a three-fold strategy: first, an Adaptive Feature-Elimination Strategy that drops less important features while retaining high-value ones; second, Adaptive Gradient Innovation and Error Sensitivity-Based Quantization, which dynamically adjusts the quantization level for innovation-aware gradient compression; and third, Communication Frequency Optimization to enhance communication efficiency. We evaluated the proposed model through extensive experiments, assessing accuracy, loss, and convergence against baseline techniques. The results show that our model achieves high communication efficiency while maintaining accuracy.
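To make the third component concrete, here is a minimal sketch of one plausible communication-frequency rule: run more local steps between uploads while the global loss is still falling, and resynchronize more often when progress stalls. The multiplicative update and the 1% improvement threshold are assumptions for illustration, not the paper's optimization procedure.

```python
def adapt_sync_interval(interval, loss_prev, loss_curr,
                        min_interval=1, max_interval=32):
    """Adjust how many local SGD steps a client runs between uploads."""
    if loss_curr < 0.99 * loss_prev:            # clear progress this round
        return min(max_interval, interval * 2)  # communicate less often
    return max(min_interval, interval // 2)     # stalled: re-sync sooner

# Usage: interval = adapt_sync_interval(interval, losses[-2], losses[-1])
```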
Problem

Research questions and friction points this paper is trying to address.

Reducing communication overhead in Federated Learning systems
Optimizing gradient quantization for efficient model updates
Balancing communication frequency with model accuracy preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive feature elimination for important-feature retention (see the sketch after this list)
Dynamic gradient quantization for innovation-aware compression
Communication frequency optimization for enhanced efficiency
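A minimal sketch of the first innovation, assuming features are ranked by a per-feature importance score (e.g., mean absolute gradient); both the scoring choice and the keep ratio are hypothetical, not the paper's exact criterion.

```python
import numpy as np

def eliminate_features(X, importance, keep_ratio=0.6):
    """Adaptive feature elimination: keep only the top-scoring columns of X.
    `importance` is a per-feature score; higher means more valuable."""
    k = max(1, int(keep_ratio * X.shape[1]))      # number of features to keep
    keep = np.sort(np.argsort(importance)[-k:])   # indices of retained features
    return X[:, keep], keep

# Example: rank features by mean absolute gradient over a local batch.
# X_pruned, kept = eliminate_features(X, np.abs(grad_per_feature).mean(axis=0))
```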
👥 Authors
Asadullah Tariq
PUCIT, NUCES, QMUL, UAEU
Trustworthy AI, Wireless Communication, Federated Learning, EdgeAI, Quantum ML
Tariq Qayyum
United Arab Emirates University (UAEU)
Distributed Simulation, Fog Computing, Vehicular Networks, Federated Learning, Data Privacy
Mohamed Adel Serhani
College of Computing and Informatics, University of Sharjah
Cloud Computing, Deep Learning, Big Data, Web Services
Farag M. Sallabi
College of Information Technology, United Arab Emirates University, Al Ain, Abu Dhabi, UAE
Ikbal Taleb
College of Technological Innovation, Zayed University, Abu Dhabi, UAE
Ezedin S. Barka
College of Information Technology, United Arab Emirates University, Al Ain, Abu Dhabi, UAE