🤖 AI Summary
To meet the stringent low-latency, low-bandwidth, and low-energy requirements of edge artificial intelligence (Edge AI), this paper proposes AirFL, an over-the-air federated learning framework that exploits the natural analog-aggregation property of wireless channels to tightly integrate communication and model aggregation. To address heterogeneous availability of channel state information (CSI), it systematically introduces three AirFL design paradigms: CSIT-aware, blind, and weighted AirFL, redefining the architecture of distributed learning driven by over-the-air computation. Theoretically, the paper establishes convergence analyses, communication-computation complexity models, and performance bounds for AirFL. Practically, it identifies critical limiting factors, including channel estimation error, device heterogeneity, and power control. Experiments demonstrate that, compared with conventional digital federated learning, AirFL reduces end-to-end latency by 3-5×, cuts uplink bandwidth consumption by over 90%, and significantly lowers client energy consumption. This work provides a scalable theoretical foundation and a systematic design methodology for wireless-native edge intelligence.
📝 Abstract
Over-the-Air Federated Learning (AirFL) is an emerging paradigm that tightly integrates wireless signal processing and distributed machine learning to enable scalable AI at the network edge. By leveraging the superposition property of wireless signals, AirFL performs communication and model aggregation simultaneously, significantly reducing latency, bandwidth usage, and energy consumption. This article offers a tutorial treatment of AirFL, presenting a novel classification into three design approaches: CSIT-aware, blind, and weighted AirFL. We provide a comprehensive guide to theoretical foundations, performance analysis, complexity considerations, practical limitations, and prospective research directions.
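The core idea, that the channel's superposition of simultaneously transmitted analog signals directly yields the aggregated model update, can be illustrated with a minimal numerical sketch. This is an assumption-laden toy model (ideal Gaussian multiple-access channel, perfect channel inversion at every device, scalar noise), not the paper's actual system model; all variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 8  # number of edge devices, model dimension (illustrative values)

# Hypothetical local model updates (e.g., local gradients) at each device
updates = rng.normal(size=(K, d))

# All K devices transmit their analog updates in the same channel use;
# the multiple-access channel superimposes the waveforms, so the server
# observes their sum plus additive receiver noise (ideal channel
# inversion assumed, so per-device fading is already compensated).
noise = 0.01 * rng.normal(size=d)
received = updates.sum(axis=0) + noise

# Scaling by 1/K recovers the global (averaged) model update: one
# channel use performs both communication and aggregation.
aggregated = received / K
true_average = updates.mean(axis=0)

print(np.allclose(aggregated, true_average, atol=0.01))
```

Note that the uplink cost is a single channel use regardless of K, which is the source of the bandwidth savings over digital schemes that transmit each device's update separately.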