Caching Techniques for Reducing the Communication Cost of Federated Learning in IoT Environments

📅 2025-07-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the high communication overhead and resource constraints of edge devices in Internet-of-Things (IoT)-based federated learning (FL), this paper proposes a cache-aware model update selection mechanism. The authors deploy FIFO, LRU, and priority-driven caching strategies at edge nodes to filter and forward high-value model updates, which they present as the first systematic integration of caching into FL communication optimization. Experiments on CIFAR-10 and a real-world medical dataset show that the approach reduces total communication volume by up to 62%, incurs negligible accuracy degradation (<0.8%), and improves memory utilization and training scalability. The method is particularly suited to latency-sensitive, high-concurrency edge applications such as smart cities and telemedicine. By using caching to decouple communication efficiency from model quality, the work proposes a lightweight, scalable paradigm for federated learning in resource-constrained IoT environments.
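The cache-and-filter mechanism described in the summary can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the class name, the L2-norm significance test, and the capacity/threshold values are all assumptions made for the example.

```python
from collections import OrderedDict
import numpy as np

class LRUUpdateCache:
    """Illustrative edge-node cache: holds client model updates under an
    LRU eviction policy and drops insignificant updates on arrival.
    (Hypothetical sketch; not the paper's exact mechanism.)"""

    def __init__(self, capacity, significance_threshold=0.01):
        self.capacity = capacity
        self.threshold = significance_threshold
        self.cache = OrderedDict()  # client_id -> latest update vector

    def offer(self, client_id, update):
        """Return True if the update was cached, False if filtered out."""
        # Filter: discard updates whose L2 norm falls below the threshold,
        # so they never consume uplink bandwidth.
        if np.linalg.norm(update) < self.threshold:
            return False
        # LRU bookkeeping: a repeat client moves to the most-recent end.
        if client_id in self.cache:
            self.cache.move_to_end(client_id)
        self.cache[client_id] = update
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return True

    def flush(self):
        """Forward all cached updates toward the server, then clear."""
        updates = list(self.cache.values())
        self.cache.clear()
        return updates
```

A FIFO variant would simply skip the `move_to_end` call, so eviction order depends only on insertion order; a priority variant would evict by a score (e.g., update magnitude) instead of recency.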

๐Ÿ“ Abstract
Federated Learning (FL) allows multiple distributed devices to jointly train a shared model without centralizing data, but communication cost remains a major bottleneck, especially in resource-constrained environments. This paper introduces caching strategies (FIFO, LRU, and priority-based) to reduce unnecessary model update transmissions. By selectively forwarding significant updates, the approach lowers bandwidth usage while maintaining model accuracy. Experiments on CIFAR-10 and medical datasets show reduced communication with minimal accuracy loss. The results confirm that intelligent caching improves scalability and memory efficiency and supports reliable FL in edge IoT networks, making it practical for deployment in smart cities, healthcare, and other latency-sensitive applications.
Problem

Research questions and friction points this paper is trying to address.

Reducing communication cost in Federated Learning for IoT
Optimizing model update transmissions using caching strategies
Maintaining accuracy while lowering bandwidth usage in FL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Caching strategies reduce FL communication cost
Selective forwarding of significant model updates
Improved scalability and memory efficiency
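The "selective forwarding of significant model updates" idea above can be illustrated with a minimal priority-based selection. The scoring rule (L2 norm as a proxy for significance) and the function name are assumptions for this sketch, not the paper's stated criterion.

```python
import heapq
import numpy as np

def select_significant_updates(updates, k):
    """Illustrative priority-based selection: score each client's update
    by its L2 norm and forward only the k highest-scoring clients,
    discarding the rest to save uplink bandwidth.
    (Hypothetical scoring rule, not the paper's exact definition.)"""
    scored = [(float(np.linalg.norm(u)), cid) for cid, u in updates.items()]
    top = heapq.nlargest(k, scored)  # k largest (score, client_id) pairs
    return [cid for _, cid in top]
```

In a deployment, the edge node would run this per round, transmit only the selected clients' updates upstream, and drop or defer the rest.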
Ahmad Alhonainy
Department of Electrical Engineering & Computer Science, The University of Missouri, Columbia, USA
Praveen Rao
Associate Professor, Electrical Engineering & Computer Science
Data Management · Data Science · Health Informatics · Cybersecurity