🤖 AI Summary
To address the communication efficiency bottleneck of decentralized nonsmooth composite optimization (i.e., smooth + nonsmooth objectives) over poorly connected networks, this paper proposes MG-Skip, a novel algorithm integrating probabilistic local updates with multi-round gossip communication. Crucially, MG-Skip is the first method whose stepsize design is fully independent of both the number of local updates and the network topology parameters. Under strong convexity, it provably skips most gossip rounds, substantially reducing communication overhead. Theoretical analysis establishes that local updates retain their communication-acceleration benefits even in nonsmooth settings. The algorithm achieves a computation complexity of $O(\kappa \log(1/\varepsilon))$ and a significantly improved communication complexity of $O\big(\sqrt{\kappa/(1-\rho)}\,\log(1/\varepsilon)\big)$, where $\kappa$ is the condition number and $\rho$ characterizes network connectivity, outperforming existing decentralized approaches.
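As a rough numerical illustration of these two orders (constants and lower-order terms are omitted, and the sample values of the condition number and connectivity below are purely hypothetical), one can compare the bounds directly:

```python
import math

def computation_rounds(kappa, eps):
    # O(kappa * log(1/eps)) gradient computations (constants omitted)
    return kappa * math.log(1 / eps)

def communication_rounds(kappa, rho, eps):
    # O(sqrt(kappa / (1 - rho)) * log(1/eps)) gossip rounds (constants omitted)
    return math.sqrt(kappa / (1 - rho)) * math.log(1 / eps)

# Hypothetical example: kappa = 100, rho = 0.9, target accuracy 1e-6.
# sqrt(100 / 0.1) ~= 31.6 < 100, so far fewer communication rounds
# than computation rounds are needed whenever kappa * (1 - rho) > 1.
comp = computation_rounds(100, 1e-6)
comm = communication_rounds(100, 0.9, 1e-6)
```

In this regime the communication count scales with $\sqrt{\kappa/(1-\rho)}$ rather than $\kappa$, which is the sense in which most gossip rounds can be skipped.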
📝 Abstract
Decentralized optimization methods with local updates have recently gained attention for their provable communication acceleration. In these methods, nodes perform several iterations of local computation between communication rounds. Nevertheless, this acceleration is effective only when the network is sufficiently well-connected and the loss function is smooth. In this paper, we propose a communication-efficient method <inline-formula><tex-math notation="LaTeX">$\textsc{MG-Skip}$</tex-math></inline-formula> with probabilistic local updates and multi-gossip communications for decentralized composite (smooth + nonsmooth) optimization, whose stepsize is independent of the number of local updates and the network topology. For any undirected and connected network, <inline-formula><tex-math notation="LaTeX">$\textsc{MG-Skip}$</tex-math></inline-formula> allows the multi-gossip communications to be skipped in most iterations in the strongly convex setting, while its computation complexity is <inline-formula><tex-math notation="LaTeX">$\mathcal{O}(\kappa \log \frac{1}{\epsilon})$</tex-math></inline-formula> and its communication complexity is only <inline-formula><tex-math notation="LaTeX">$\mathcal{O}(\sqrt{\frac{\kappa}{1-\rho}} \log \frac{1}{\epsilon})$</tex-math></inline-formula>, where <inline-formula><tex-math notation="LaTeX">$\kappa$</tex-math></inline-formula> is the condition number of the loss function, <inline-formula><tex-math notation="LaTeX">$\rho$</tex-math></inline-formula> reflects the connectivity of the network topology, and <inline-formula><tex-math notation="LaTeX">$\epsilon$</tex-math></inline-formula> is the target accuracy. The theoretical results indicate that <inline-formula><tex-math notation="LaTeX">$\textsc{MG-Skip}$</tex-math></inline-formula> achieves provable communication acceleration, thereby validating the advantages of local updates in the nonsmooth setting.
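The mechanism described above (a local computation step at every iteration, with multi-round gossip performed only with some probability) can be sketched as follows. This is a simplified illustration under assumed update rules, not the exact MG-Skip algorithm from the paper: the nonsmooth term is taken to be an l1 penalty, the local step is a plain proximal-gradient update, and `W` is an assumed doubly stochastic mixing matrix.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (example nonsmooth term).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mg_skip_sketch(grads, W, x0, stepsize=0.1, p=0.1, T=3, K=200,
                   lam=0.01, rng=None):
    """Illustrative sketch (NOT the exact MG-Skip updates):
    every iteration each node takes a local proximal-gradient step,
    and only with probability p are T rounds of gossip averaging run,
    so communication is skipped in most iterations."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = x0.copy()                       # entry i = node i's local variable
    for _ in range(K):
        # Local computation: gradient step + prox of the nonsmooth term.
        G = np.stack([g(X[i]) for i, g in enumerate(grads)])
        X = soft_threshold(X - stepsize * G, stepsize * lam)
        if rng.random() < p:            # gossip happens rarely
            for _ in range(T):          # multi-round gossip via mixing matrix W
                X = W @ X
    return X
```

For example, with four nodes on a ring holding scalar quadratic losses $f_i(x) = \frac{1}{2}(x - a_i)^2$, the nodes drift toward a neighborhood of the common regularized minimizer while communicating only in roughly a $p$ fraction of iterations. Note this naive sketch lacks the correction terms that give MG-Skip its exact convergence and topology-independent stepsize.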