🤖 AI Summary
To address the uplink communication bottleneck in federated edge learning (FEEL), together with the low recovery accuracy and poor robustness of existing over-the-air (OTA) aggregation schemes at low signal-to-noise ratio (SNR), this paper proposes an end-to-end trainable digital OTA computation framework. The method jointly optimizes an unsourced random access codebook, vector quantization, and an unrolled approximate message passing (AMP)-style decoder (AMP-DA-Net), and, for the first time, extends OTA aggregation to symmetric functions such as the trimmed mean and majority voting. It further integrates local statistical modeling into the joint design of digital modulation and coding. Experiments show that, under heterogeneous data distributions and dynamic device participation, the minimum SNR required for reliable OTA computation is reduced by more than 10 dB; convergence accuracy matches or surpasses state-of-the-art methods across the full SNR range; and the framework remains robust to message corruption and nonlinear distortions.
📝 Abstract
Federated edge learning (FEEL) enables wireless devices to collaboratively train a centralised model without sharing raw data, but repeated uplink transmission of model updates makes communication the dominant bottleneck. Over-the-air (OTA) aggregation alleviates this by exploiting the superposition property of the wireless channel, allowing devices to transmit simultaneously and thereby merging communication with computation. Digital OTA schemes extend this principle by incorporating the robustness of conventional digital communication, but current designs remain limited in low signal-to-noise ratio (SNR) regimes. This work proposes a learned digital OTA framework that improves recovery accuracy, convergence behaviour, and robustness under challenging SNR conditions while maintaining the same uplink overhead as state-of-the-art methods. The design integrates an unsourced random access (URA) codebook with vector quantisation and AMP-DA-Net, an unrolled approximate message passing (AMP)-style decoder trained end-to-end together with the digital codebook, using statistics of the devices' local training available at the parameter server. The proposed design extends OTA aggregation beyond averaging to a broad class of symmetric functions, including trimmed means and majority-based rules. Experiments on highly heterogeneous device datasets and varying numbers of active devices show that the proposed design extends reliable digital OTA operation by more than 10 dB into low SNR regimes while matching or improving performance across the full SNR range. The learned decoder remains effective under message corruption and nonlinear aggregation, highlighting the broader potential of end-to-end learned design for digital OTA communication in FEEL.
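To make the "symmetric functions" claim concrete, the sketch below shows two such aggregation rules the abstract names: a coordinate-wise trimmed mean and majority voting on sign-quantised updates. This is a plain server-side illustration of what these functions compute, not the paper's OTA implementation; the function names and the `trim_ratio` parameter are assumptions for illustration only.

```python
import numpy as np

def trimmed_mean(updates, trim_ratio=0.1):
    """Coordinate-wise trimmed mean: per coordinate, drop the k largest
    and k smallest device values, then average the rest (robust to outliers)."""
    updates = np.asarray(updates, dtype=float)   # shape: (num_devices, dim)
    k = int(trim_ratio * updates.shape[0])
    sorted_vals = np.sort(updates, axis=0)       # sort each coordinate independently
    if k > 0:
        sorted_vals = sorted_vals[k:-k]
    return sorted_vals.mean(axis=0)

def majority_vote(sign_updates):
    """Majority voting on sign-quantised updates (as in signSGD-style rules):
    the aggregate per coordinate is the sign of the summed device votes."""
    votes = np.sign(np.asarray(sign_updates, dtype=float)).sum(axis=0)
    return np.sign(votes)
```

Both rules are symmetric in the sense relevant here: their output is invariant to permuting the device updates, which is why they are compatible with an aggregation channel that only delivers an unordered superposition of transmitted messages.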