🤖 AI Summary
This work addresses downlink distortion and uplink over-the-air aggregation errors in wireless federated learning, caused by heterogeneous device coherence times and bandwidth constraints. To tackle these challenges, the authors propose a coherence-aware joint communication-learning optimization framework. The approach partitions OFDM super-blocks into sub-blocks aligned with the shortest coherence time, uses product superposition to ride global-model symbols intended for static devices on the pilot tones that dynamic devices need for channel estimation, and falls back on prior local models to mitigate partial reception, thereby repurposing pilot overhead as effective payload. The method carries convergence guarantees under imperfect channel state information and aggregation noise, and it improves communication efficiency, reduces latency, and achieves higher learning accuracy than conventional federated learning baselines.
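As a concrete illustration of the partitioning step, here is a minimal Python sketch that sizes sub-blocks by the shortest coherence time among scheduled devices. The function name, symbol duration, and coherence times are hypothetical, chosen only to make the idea runnable; the paper's actual scheduler is not reproduced here.

```python
# Hypothetical sketch: align sub-block length to the shortest coherence
# time among scheduled devices (all names and numbers are illustrative).
import math

def subblock_plan(superblock_symbols, symbol_duration_s, coherence_times_s):
    """Partition an OFDM super-block into sub-blocks no longer than the
    smallest coherence time, so each sub-block sees a roughly constant
    channel on every link."""
    t_min = min(coherence_times_s)                        # most dynamic device
    symbols_per_subblock = max(1, math.floor(t_min / symbol_duration_s))
    n_subblocks = math.ceil(superblock_symbols / symbols_per_subblock)
    return symbols_per_subblock, n_subblocks

# Example: 1400 OFDM symbols of 71.4 us, devices with 5 ms..50 ms coherence.
L, n = subblock_plan(1400, 71.4e-6, [5e-3, 20e-3, 50e-3])
print(f"{L} symbols per sub-block, {n} sub-blocks")   # -> 70 per sub-block, 20 sub-blocks
```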
📝 Abstract
Distributed machine learning (ML) over wireless networks hinges on accurate channel state information (CSI) and the efficient exchange of high-dimensional model updates. Both demands are governed by channel coherence time and coherence bandwidth, which vary across devices (links) due to heterogeneous mobility and scattering, degrading downlink model delivery and distorting uplink over-the-air (OTA) aggregation. We propose a coherence-aware federated learning (FL) framework that jointly addresses downlink and uplink impairments with communication-efficient strategies. In the downlink, we employ product superposition to multiplex global-model symbols for long-coherence (static) devices onto the pilot tones required by short-coherence (dynamic) devices for channel estimation, turning pilot overhead into payload while preserving estimation fidelity. In the proposed scheme, an orthogonal frequency-division multiplexing (OFDM) super-block is partitioned into sub-blocks aligned with the smallest coherence time and coherence bandwidth among the devices, enabling consistent channel estimation and stabilizing OTA aggregation across heterogeneous devices. Partial model reception at dynamic devices is mitigated via previous local model filling (PLMF), which reuses prior local updates in place of missing global-model entries. We establish convergence guarantees under heterogeneous link impairments, imperfect CSI, and aggregation noise. The proposed framework enables efficient scheduling under coherence heterogeneity; analysis and experiments demonstrate notable gains in communication efficiency, latency, and learning accuracy over conventional FL baselines.
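To make the product-superposition idea concrete, below is a minimal single-tone toy in Python. It is not the paper's signal model: the pilot, channels, and noise level are invented, and a single flat-fading coefficient stands in for the full OFDM grid. The point is only that one transmitted tone serves two roles: the static device, which already knows its channel thanks to long coherence, decodes the model symbol riding on the pilot, while the dynamic device estimates the effective channel (its physical channel times the superposed symbol) that coherent detection within the sub-block requires.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.01                          # noise std (toy value)

p = 1.0 + 0j                          # known pilot symbol on a pilot tone
s = np.exp(1j * np.pi / 4)            # global-model symbol for the static device
h_s, h_d = rng.normal(size=2) + 1j * rng.normal(size=2)  # toy flat-fading channels

x = s * p                             # product superposition: model symbol rides the pilot

# Static device: long coherence, so h_s is assumed already known; the
# pilot tone doubles as payload and the model symbol is decoded from it.
y_s = h_s * x + sigma * (rng.normal() + 1j * rng.normal())
s_hat = y_s / (h_s * p)

# Dynamic device: uses the same tone as a pilot, estimating the
# *effective* channel h_d * s needed to coherently detect the remaining
# symbols of its sub-block.
y_d = h_d * x + sigma * (rng.normal() + 1j * rng.normal())
h_eff_hat = y_d / p

print(abs(s_hat - s))                 # ~0: static device recovers the model symbol
print(abs(h_eff_hat - h_d * s))       # ~0: dynamic device's effective-channel estimate
```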
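The PLMF fallback and the noisy uplink OTA aggregation step can likewise be sketched in a few lines. Everything below is a toy under idealized assumptions (perfect channel-inversion pre-scaling, no power constraint); the dimensions, reception mask, and noise level are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, sigma = 8, 4, 0.05              # model size, device count, noise std (toy)

# PLMF at a dynamic device: global-model entries lost to partial
# reception are filled with the previous local model rather than zeros.
global_model = rng.normal(size=d)
prev_local = rng.normal(size=d)
received_mask = rng.random(d) > 0.3   # True where the broadcast got through
local_start = np.where(received_mask, global_model, prev_local)

# Uplink OTA aggregation: devices pre-scale by their channel inverse and
# transmit simultaneously; the server receives the analog sum plus noise.
updates = rng.normal(size=(K, d))                  # local updates to aggregate
h = rng.normal(size=K) + 1j * rng.normal(size=K)   # uplink channel coefficients
tx = updates / h[:, None]                          # idealized inversion (no power limit)
y = (h[:, None] * tx).sum(axis=0) + sigma * rng.normal(size=d)
aggregate = y.real / K                             # noisy estimate of the mean update
print(np.abs(aggregate - updates.mean(axis=0)).max())  # small: aggregation error
```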