Composite Optimization with Error Feedback: the Dual Averaging Approach

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Background: In distributed machine learning, existing error-feedback (EF) methods lack theoretical convergence guarantees for composite optimization (objectives with nonsmooth regularizers) under communication compression, and often fail to converge in this setting. Method: We propose the first EF convergence analysis framework tailored to general composite optimization, uncovering the fundamental limitations of conventional EF when applied to nondifferentiable structures. Our approach integrates dual averaging with the EControl error-feedback mechanism, yielding an inexact dual averaging analytical template. Contribution/Results: This framework establishes the first strong convergence guarantee, O(1/√T), for composite objectives under compressed communication. Theoretical analysis and empirical evaluation demonstrate that our algorithm achieves significantly higher communication compression rates while maintaining convergence speed and generalization performance comparable to uncompressed baselines.
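The classical EF mechanism the summary contrasts against can be sketched as follows. This is an illustrative single-worker sketch with a top-k compressor, not the paper's algorithm; the function names and the SGD-style update are our own simplifications:

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd_step(x, e, grad, lr, k):
    """One step of classical error feedback (in the spirit of Seide et al., 2014):
    compress the error-corrected update, keep the residual in local memory."""
    p = lr * grad + e   # error-corrected update direction
    c = top_k(p, k)     # compressed message actually communicated
    e_new = p - c       # residual stored locally for the next round
    x_new = x - c       # model update uses only the compressed part
    return x_new, e_new
```

The key invariant is that nothing is lost: the compressed message plus the new residual exactly reconstruct the intended update. The paper's observation is that this mechanism, analyzed for smooth unconstrained problems, breaks down once a nonsmooth composite term (e.g. a proximal step) is applied on top of the compressed update.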

📝 Abstract
Communication efficiency is a central challenge in distributed machine learning training, and message compression is a widely used solution. However, standard Error Feedback (EF) methods (Seide et al., 2014), though effective for smooth unconstrained optimization with compression (Karimireddy et al., 2019), fail in the broader and practically important setting of composite optimization, which captures, e.g., objectives consisting of a smooth loss combined with a non-smooth regularizer or constraints. The theoretical foundation and behavior of EF in the general composite setting remain largely unexplored. In this work, we consider composite optimization with EF. We point out that the basic EF mechanism and its analysis no longer hold when a composite part is involved, and we argue that this stems from a fundamental limitation of both the method and its analysis technique. We propose a novel method that combines Dual Averaging with EControl (Gao et al., 2024), a state-of-the-art variant of the EF mechanism, and achieve for the first time a strong convergence analysis for composite optimization with error feedback. Along with our new algorithm, we also provide a novel analysis template for the inexact dual averaging method, which might be of independent interest. We also provide experimental results to complement our theoretical findings.
Problem

Research questions and friction points this paper is trying to address.

Addressing error feedback limitations in composite optimization problems
Developing communication-efficient distributed learning with non-smooth regularizers
Establishing theoretical foundation for error feedback in constrained optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Dual Averaging with EControl mechanism
Achieves strong convergence for composite optimization
Provides new analysis template for inexact dual averaging
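The dual averaging component named above can be illustrated in its simplest single-node form: regularized dual averaging for an ℓ1-composite objective, where the composite step has a closed-form soft-thresholding solution. This is a minimal sketch of plain dual averaging (in the style of Xiao's RDA), not the paper's compressed, EControl-augmented method; `rda_l1`, the step-size rule `beta = gamma * sqrt(t)`, and all parameter names are our own assumptions:

```python
import numpy as np

def rda_l1(grad_fn, dim, lam, gamma, T):
    """Regularized dual averaging for min_x f(x) + lam * ||x||_1.
    Averages all past gradients and takes a composite step with a
    closed-form soft-threshold; single-node, uncompressed sketch."""
    x = np.zeros(dim)
    g_sum = np.zeros(dim)
    for t in range(1, T + 1):
        g_sum += grad_fn(x)          # accumulate gradients (the "dual" state)
        g_bar = g_sum / t            # running average of gradients
        beta = gamma * np.sqrt(t)    # prox-regularization parameter
        # composite step: soft-thresholding of the averaged gradient
        x = -(t / beta) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
    return x
```

Because the iterate is rebuilt each round from the accumulated gradient sum rather than from the previous iterate, dual averaging tolerates inexact (e.g. compressed) gradient information more gracefully than mirror-descent-style updates, which is the property the paper's combination with EControl exploits.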
Yuan Gao
CISPA Helmholtz Center for Information Security, Germany
Anton Rodomanov
CISPA Helmholtz Center for Information Security
Optimization · Machine Learning · Numerical Methods · Complexity Guarantees
Jeremy Rack
CISPA Helmholtz Center for Information Security, Germany
Sebastian U. Stich
CISPA Helmholtz Center for Information Security, Germany