AI Summary
Federated learning under differential privacy (DP) faces dual challenges: high communication overhead due to high-dimensional gradient transmissions and DP noise magnitude scaling with dimensionality $d$. This work observes that client gradient updates exhibit strong temporal correlation and effectively reside in a $k$-dimensional subspace ($k \ll d$). To address this, we propose DOME, a decentralized optimization framework featuring a correlation-aware compact sketch mechanism and orthogonal random probing to dynamically track gradient direction evolution while preserving historical information. DOME integrates low-dimensional projection, DP perturbation, and secure aggregation to jointly optimize communication and privacy efficiency. Theoretically, DOME satisfies $(\varepsilon,\delta)$-DP; per-round communication complexity reduces from $O(d)$ to $O(k)$; DP noise variance decreases to $\sigma^2 k$; and gradient mean squared error approaches the theoretical lower bound.
Abstract
Federated learning with differential privacy suffers from two major costs: each client must transmit $d$-dimensional gradients every round, and the magnitude of DP noise grows with $d$. Yet empirical studies show that gradient updates exhibit strong temporal correlations and lie in a $k$-dimensional subspace with $k \ll d$. Motivated by this, we introduce DOME, a decentralized DP optimization framework in which each client maintains a compact sketch to project gradients into $\mathbb{R}^k$ before privatization and Secure Aggregation. This reduces per-round communication from $O(d)$ to $O(k)$ and moves towards a gradient approximation mean-squared error of $\sigma^2 k$. To allow the sketch to span new directions and prevent it from collapsing onto historical gradients, we augment it with random probes orthogonal to historical directions. We prove that the overall protocol satisfies $(\varepsilon,\delta)$-differential privacy.