🤖 AI Summary
This work addresses decentralized learning in heterogeneous multicast networks under differential privacy and node energy budget constraints. We propose a privacy-preserving algorithm that jointly optimizes transmit power control and Gaussian noise injection, tailored to asymmetric channel conditions modeled by row-stochastic adjacency matrices—marking the first such co-design of transmission power and privacy noise levels. Theoretically, we establish an $O(\log T)$ convergence rate. Experimentally, under identical privacy budgets ($\varepsilon$) and energy constraints, our method achieves significantly higher model accuracy than existing decentralized differentially private approaches, while simultaneously improving communication efficiency, privacy guarantees, and convergence performance.
📝 Abstract
We propose a power-controlled differentially private decentralized learning algorithm for a set of clients that collaboratively train a common learning model. The network is characterized by a row-stochastic adjacency matrix, which reflects the different channel gains between clients. In our privacy-preserving approach, both the transmit power for model updates and the level of injected Gaussian noise are jointly controlled to satisfy given privacy and energy budgets. We show that the proposed algorithm achieves a convergence rate of $O(\log T)$, where $T$ is the time horizon in the regret bound. Furthermore, our numerical results confirm that the proposed algorithm outperforms existing decentralized differentially private approaches.
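To make the setting concrete, here is a minimal sketch of one communication round under the assumptions the abstract describes: each node broadcasts a power-scaled model update perturbed by Gaussian privacy noise, and neighbors mix the received signals through a row-stochastic matrix. All variable names, the power and noise values, and the update rule itself are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n clients, d model parameters each.
n, d = 4, 3
W = rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)       # row-stochastic mixing matrix (rows sum to 1)

models = rng.standard_normal((n, d))    # local model parameters
power = np.full(n, 0.5)                 # assumed per-node transmit power within budget
sigma = 0.8                             # assumed Gaussian noise scale for some epsilon

# Each node transmits a power-scaled update plus privacy noise;
# receivers aggregate neighbors' noisy signals via W.
noisy = power[:, None] * models + sigma * rng.standard_normal((n, d))
mixed = W @ noisy                        # one round of decentralized averaging
```

The key design point the abstract highlights is that `power` and `sigma` are not chosen independently: they are co-tuned so that the pair jointly meets both the privacy budget and the energy budget.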