🤖 AI Summary
Local SGD in decentralized federated learning lacks asymptotic statistical guarantees. Method: This paper establishes, for the first time, a Berry–Esseen-type Gaussian approximation for the final iterate of Local SGD, achieving an $O(1/\sqrt{n})$ convergence rate; it further develops two time-uniform Gaussian approximations that yield functional central limit theorems over the entire training trajectory. Contributions/Results: The theoretical framework provides a rigorous foundation for multiplier bootstrap methods and enables online, dynamic detection of adversarial attacks. Crucially, it relaxes the classical i.i.d. assumption, accommodating non-i.i.d. and non-stationary stochastic processes. Numerical simulations show that the proposed approximations are accurate and robust, particularly in small-sample and dynamically attacked settings, thereby supporting trustworthy hypothesis testing and anomaly identification under privacy-preserving frameworks.
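For orientation, a Berry–Esseen-type bound of the kind summarized above typically takes the following form; the symbols here ($\hat{\theta}_n$ for the final Local SGD iterate, $\theta^\star$ for the target parameter, $\Sigma$ for the limiting covariance, $\mathcal{A}$ for a class of convex sets) are illustrative placeholders and may differ from the paper's exact statement:

```latex
\sup_{A \in \mathcal{A}}
\Bigl| \mathbb{P}\bigl( \sqrt{n}\,\Sigma^{-1/2}\,(\hat{\theta}_n - \theta^\star) \in A \bigr)
       - \mathbb{P}(Z \in A) \Bigr|
\le \frac{C}{\sqrt{n}},
\qquad Z \sim \mathcal{N}(0, I_d).
```

The $O(1/\sqrt{n})$ rate on the right-hand side is what makes the Gaussian approximation, and hence the bootstrap procedures built on it, quantitatively valid at finite sample sizes.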
📝 Abstract
Federated Learning has gained traction in privacy-sensitive collaborative environments, with local SGD emerging as a key optimization method in decentralized settings. While its convergence properties are well studied, asymptotic statistical guarantees beyond convergence remain limited. In this paper, we present two generalized Gaussian approximation results for local SGD and explore their implications. First, we prove a Berry–Esseen theorem for the final local SGD iterates, enabling valid multiplier bootstrap procedures. Second, motivated by robustness considerations, we introduce two distinct time-uniform Gaussian approximations for the entire trajectory of local SGD. The time-uniform approximations support Gaussian bootstrap-based tests for detecting adversarial attacks. Extensive simulations are provided to support our theoretical results.
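To make the setup concrete, below is a minimal NumPy sketch of Local SGD on a synthetic least-squares problem, followed by a multiplier bootstrap for a confidence interval. Everything here is an illustrative assumption: the objective, step size, client count, and in particular the bootstrap, which reweights independent replicate estimates rather than the online updates themselves as a faithful implementation of the paper's procedure would.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: every client draws fresh samples from the
# same linear model y = x @ theta_star + noise (i.i.d. here for simplicity).
d, n_clients, local_steps, rounds = 5, 4, 10, 200
theta_star = rng.normal(size=d)

def grad(theta, batch_size=8):
    """Stochastic gradient of 0.5 * E||X @ theta - y||^2 on a fresh minibatch."""
    X = rng.normal(size=(batch_size, d))
    y = X @ theta_star + 0.1 * rng.normal(size=batch_size)
    return X.T @ (X @ theta - y) / batch_size

def local_sgd(step_size=0.05):
    """Local SGD: each client takes `local_steps` SGD steps, then iterates are averaged."""
    theta = np.zeros(d)
    for _ in range(rounds):
        client_iterates = []
        for _ in range(n_clients):
            th = theta.copy()
            for _ in range(local_steps):
                th -= step_size * grad(th)
            client_iterates.append(th)
        theta = np.mean(client_iterates, axis=0)  # communication / averaging step
    return theta

# Multiplier bootstrap over independent replications of the final iterate.
# (Resampling replicate estimates is only a stand-in for the paper's method.)
reps = np.array([local_sgd() for _ in range(50)])
theta_hat = reps.mean(axis=0)
B = 500
boot = np.empty((B, d))
for b in range(B):
    w = rng.normal(loc=1.0, size=len(reps))           # Gaussian multipliers, mean 1
    boot[b] = (w[:, None] * (reps - theta_hat)).mean(axis=0)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("95% CI for theta[0]:", theta_hat[0] + lo[0], theta_hat[0] + hi[0])
```

The same bootstrap draws can in principle be repeated at every communication round to track the whole trajectory, which is the regime where the time-uniform approximations above, rather than the fixed-time Berry–Esseen bound, are the relevant guarantee.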