Communication Efficient, Differentially Private Distributed Optimization using Correlation-Aware Sketching

📅 2025-07-04
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Federated learning under differential privacy (DP) faces dual challenges: high communication overhead from transmitting high-dimensional gradients, and DP noise whose magnitude scales with the dimension $d$. This work observes that client gradient updates exhibit strong temporal correlation and effectively reside in a $k$-dimensional subspace ($k \ll d$). To address this, we propose DOME, a decentralized optimization framework featuring a correlation-aware compact sketch mechanism and orthogonal random probing to dynamically track the evolution of gradient directions while preserving historical information. DOME integrates low-dimensional projection, DP perturbation, and secure aggregation to jointly optimize communication and privacy efficiency. Theoretically, DOME satisfies $(\varepsilon,\delta)$-DP; per-round communication complexity drops from $O(d)$ to $O(k)$; DP noise variance decreases to $\sigma^2 k$; and the gradient mean squared error approaches the theoretical lower bound.
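The project-privatize-aggregate pipeline described in the summary can be sketched in a few lines. This is a minimal illustration, not DOME's exact construction: the orthonormal sketch basis `S`, the dimensions, and the noise scale `sigma` are assumptions for demonstration; it shows why only $k$ numbers travel per round and why the injected noise energy concentrates around $\sigma^2 k$ rather than $\sigma^2 d$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10_000, 32     # full and sketched dimensions (k << d)
sigma = 1.0           # per-coordinate DP noise scale (illustrative)

# Orthonormal sketch basis S in R^{d x k}; its columns span the
# low-dimensional subspace the sketch currently tracks.
S, _ = np.linalg.qr(rng.standard_normal((d, k)))

g = rng.standard_normal(d)  # a client's raw gradient

# Client side: project to R^k, privatize, transmit only k numbers.
y = S.T @ g                                   # k-dimensional sketch
y_priv = y + sigma * rng.standard_normal(k)   # Gaussian DP perturbation

# Server side: lift back to R^d after (secure) aggregation.
g_hat = S @ y_priv

# Because S has orthonormal columns, the noise energy added to the
# reconstruction is ||noise||^2, concentrating near sigma^2 * k.
noise_energy = float(np.sum((g_hat - S @ (S.T @ g)) ** 2))
print(noise_energy)
```

The reconstruction error against the true gradient also has a bias term from the components of $g$ outside the tracked subspace, which is what the orthogonal random probes are meant to shrink over time.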

πŸ“ Abstract
Federated learning with differential privacy suffers from two major costs: each client must transmit $d$-dimensional gradients every round, and the magnitude of DP noise grows with $d$. Yet empirical studies show that gradient updates exhibit strong temporal correlations and lie in a $k$-dimensional subspace with $k \ll d$. Motivated by this, we introduce DOME, a decentralized DP optimization framework in which each client maintains a compact sketch to project gradients into $\mathbb{R}^k$ before privatization and Secure Aggregation. This reduces per-round communication from order $d$ to order $k$ and drives the gradient-approximation mean-squared error toward $\sigma^2 k$. To allow the sketch to span new directions and prevent it from collapsing onto historical gradients, we augment it with random probes orthogonal to historical directions. We prove that the overall protocol satisfies $(\varepsilon,\delta)$-Differential Privacy.
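The "random probes orthogonal to historical directions" idea from the abstract can be realized by projecting a fresh random vector onto the orthogonal complement of the current sketch basis. The snippet below is one plausible sketch of that step under assumed notation (the basis `B` and helper `add_orthogonal_probe` are illustrative, not the paper's stated update rule):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 1_000, 16

# Current sketch basis: orthonormal columns spanning historical
# gradient directions.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))

def add_orthogonal_probe(B, rng):
    """Append one random probe orthogonal to the columns of B."""
    p = rng.standard_normal(B.shape[0])
    p -= B @ (B.T @ p)        # strip components along historical directions
    p /= np.linalg.norm(p)    # renormalize to unit length
    return np.column_stack([B, p])

B_new = add_orthogonal_probe(B, rng)
# The probe is (numerically) orthogonal to every historical direction.
print(bool(np.abs(B.T @ B_new[:, -1]).max() < 1e-10))  # True
```

Probing in the orthogonal complement is what lets the sketch pick up genuinely new gradient directions instead of repeatedly re-measuring the subspace it already spans.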
Problem

Research questions and friction points this paper is trying to address.

Reduces communication cost in federated learning
Minimizes gradient approximation error with sketching
Ensures differential privacy in decentralized optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses correlation-aware sketching for gradient compression
Reduces per-round communication from $O(d)$ to $O(k)$
Augments the sketch with random orthogonal probes