Modular Distributed Nonconvex Learning with Error Feedback

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the error accumulation induced by stochastic compression in distributed nonconvex learning. Methodologically, the authors propose a modular ADMM-gradient hybrid framework that combines the robustness of ADMM with the computational efficiency of gradient-based methods, and introduce a stochastic integral-type error-feedback mechanism that rejects the compression error almost surely. Theoretically, they establish almost-sure convergence of the algorithm to the set of stationary points under nonconvexity via a stochastic timescale-separation argument. Experimentally, the method suppresses compression noise, improves communication efficiency and convergence stability, and performs well on distributed nonconvex classification tasks.
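A minimal sketch of the error-feedback idea, assuming a top-k compressor; the names `top_k`, `ErrorFeedback`, and `transmit` are illustrative, not the paper's implementation. The integral action accumulates the residual the compressor discards and re-injects it into the next message, so the error is corrected over time rather than lost.

```python
import numpy as np

def top_k(x, k):
    """Top-k sparsification, a common compressor (illustrative choice)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]  # indices of the k largest-magnitude entries
    out[idx] = x[idx]
    return out

class ErrorFeedback:
    """Integral-type error feedback: accumulate what the compressor discarded
    and re-inject it before the next transmission, so compression errors are
    corrected over time instead of being lost."""
    def __init__(self, dim):
        self.e = np.zeros(dim)  # running sum (integral) of compression residuals

    def transmit(self, x, k):
        corrected = x + self.e   # re-inject the accumulated error
        q = top_k(corrected, k)  # message actually sent over the network
        self.e = corrected - q   # store the new residual for the next round
        return q
```

Classical error-feedback schemes follow this same accumulate-and-re-inject pattern; the paper's stochastic integral action may differ in detail, but the correction principle it illustrates is the same.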

📝 Abstract
In this paper, we design a novel distributed learning algorithm using stochastic compressed communications. In detail, we pursue a modular approach, merging ADMM and a gradient-based approach, benefiting from the robustness of the former and the computational efficiency of the latter. Additionally, we integrate a stochastic integral action (error feedback) enabling almost sure rejection of the compression error. We analyze the resulting method in nonconvex scenarios and guarantee almost sure asymptotic convergence to the set of stationary points of the problem. This result is obtained using system-theoretic tools based on stochastic timescale separation. We corroborate our findings with numerical simulations in nonconvex classification.
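A minimal sketch of what a "modular ADMM-gradient" local update could look like, assuming a consensus formulation with local copies coupled to a global variable z; `node_step`, `grad_f`, `rho`, and `eta` are hypothetical names, and the paper's actual update rule may differ.

```python
import numpy as np

def node_step(x, z, lam, grad_f, rho=1.0, eta=0.1):
    """One hypothetical local update in an ADMM-gradient hybrid:
    rather than exactly minimizing the local augmented Lagrangian
        L(x) = f(x) + lam^T (x - z) + (rho / 2) * ||x - z||^2,
    take a single gradient step on it (the cheap, gradient-based module),
    then perform the usual ADMM dual ascent."""
    grad_L = grad_f(x) + lam + rho * (x - z)  # gradient of the augmented Lagrangian
    x_new = x - eta * grad_L                  # inexact primal update
    lam_new = lam + rho * (x_new - z)         # dual ascent on the consensus constraint
    return x_new, lam_new
```

The design point this sketch captures: the expensive exact minimization of vanilla ADMM is replaced by a single cheap gradient step, keeping ADMM's dual structure (robustness) while inheriting the per-iteration cost of gradient methods (efficiency).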
Problem

Research questions and friction points this paper is trying to address.

How to design a distributed learning algorithm that remains reliable under stochastic compressed communications.
How to combine the robustness of ADMM with the computational efficiency of gradient-based methods.
How to guarantee convergence to stationary points in nonconvex problems despite compression errors.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular algorithm combining ADMM's robustness with the efficiency of gradient-based methods
Stochastic integral action (error feedback) that rejects compression error almost surely
System-theoretic convergence analysis based on stochastic timescale separation (sketched below)
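For intuition only: stochastic timescale separation usually refers to coupled recursions in which one state evolves much faster than the other, so the slow variable effectively sees the fast dynamics at equilibrium. The generic form below is an illustrative assumption, not the paper's exact model; h, g, the step sizes α_k, β_k, and the noise terms ξ_k, ζ_k are hypothetical placeholders.

```latex
% Generic two-timescale stochastic recursion (illustrative; not the paper's exact dynamics).
% The fast state y (e.g., the error-feedback dynamics) equilibrates between updates of the
% slow decision variable x, which then effectively sees the averaged fast dynamics.
\[
\begin{aligned}
  x_{k+1} &= x_k + \alpha_k \, h(x_k, y_k, \xi_k) && \text{(slow: optimization step)} \\
  y_{k+1} &= y_k + \beta_k \, g(x_k, y_k, \zeta_k) && \text{(fast: error dynamics)}
\end{aligned}
\qquad \text{with } \alpha_k / \beta_k \to 0 .
\]
```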