Communication-Learning Co-Design for Differentially Private Over-the-Air Federated Distillation

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of high communication overhead and stringent differential privacy (DP) guarantees in federated learning (FL) for large-scale models, this paper proposes a Differential Privacy-enabled Over-the-Air Federated Knowledge Distillation (DP-AirFD) framework tailored to multi-access wireless channels. The method injects DP noise into low-dimensional model outputs rather than full model weights, and exploits the natural superposition property of the wireless multi-access channel to perform privacy-preserving, analog-domain, concurrent model aggregation at the edge server. By decoupling communication decisions (power allocation) and learning decisions (distillation step size) across two timescales, the authors formulate a joint optimization that balances convergence rate against privacy loss, yielding closed-form optimal transmit and receive policies. Experiments show that DP-AirFD significantly reduces communication load while satisfying per-device power and privacy-budget constraints, achieving a superior privacy-accuracy trade-off over conventional FL and state-of-the-art DP-FL baselines.
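The core mechanism in the summary above can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the number of devices, output dimension, noise scales, and the plain averaging step are all illustrative assumptions; in the paper the noise variance and receive scaling would be derived from the per-device privacy budget, transmit power, and channel gains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K devices each hold a low-dimensional model output
# (e.g. soft logits over C classes) instead of a full model weight vector.
K, C = 5, 10
local_outputs = rng.random((K, C))

# DP step (Gaussian mechanism, illustrative sigma): each device perturbs
# its disclosed signal locally before transmission.
sigma = 0.1
noisy = local_outputs + rng.normal(0.0, sigma, size=(K, C))

# Over-the-air aggregation: the multi-access channel physically sums the
# simultaneously transmitted analog signals, so the server observes the
# superposition (plus receiver noise) rather than K separate uploads.
channel_noise = rng.normal(0.0, 0.05, size=C)
received = noisy.sum(axis=0) + channel_noise

# The server rescales the superposed signal to estimate the averaged
# distilled output used in the next FD round.
global_estimate = received / K
```

Because all K noise terms are summed before rescaling, each device only needs to cover a 1/K share of the total perturbation for a given aggregate privacy level, which is the "shared responsibility" for DP that the summary refers to.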

📝 Abstract
The ever-growing size of learning models challenges the communication efficiency and privacy preservation of traditional federated learning (FL). In this paper, we propose a novel differentially private (DP) over-the-air federated distillation (FD) framework, where wireless devices (WDs) periodically share noise-perturbed model outputs with the parameter server by harnessing the superposition property of multi-access channels. Accordingly, over-the-air FD enables the WDs to share the responsibility for DP preservation on the low-dimensional disclosed signals. We study the communication-learning co-design problem in differentially private over-the-air FD, aiming to maximize the learning convergence rate while meeting the transmit power and DP requirements of the WDs. The main challenge is rooted in the intractable learning and privacy analysis of over-the-air FD, together with the strong coupling among decision variables spanning two timescales. To tackle this problem, we first derive the analytical learning convergence rate and per-WD privacy losses, based on which the optimal per-round transceiver design and the long-term training-rounds decision are obtained in closed form. Numerical results demonstrate that the proposed differentially private over-the-air FD approach achieves a better learning-privacy trade-off with greatly reduced communication overhead compared with conventional FL benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Maximizing learning convergence rate under privacy constraints
Addressing communication inefficiency in federated learning systems
Ensuring differential privacy in over-the-air model sharing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentially private over-the-air federated distillation
Harnesses multi-access channel superposition property
Closed-form optimal transceiver and training design