Efficiency Boost in Decentralized Optimization: Reimagining Neighborhood Aggregation with Minimal Overhead

πŸ“… 2025-09-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address inefficient information aggregation caused by data heterogeneity in decentralized learning, this paper proposes DYNAWEIGHT, a dynamic weighting framework that replaces static aggregation weights (e.g., Metropolis weights) with locally computed, loss-driven weights. Each node assigns higher aggregation weights to neighbors whose local loss values diverge more from its own, prioritizing the fusion of more informative and divergent updates to accelerate convergence. The mechanism relies solely on local computation, adds minimal communication and memory overhead, and is compatible with arbitrary optimizers and network topologies. Experiments on MNIST, CIFAR-10, and CIFAR-100 show that DYNAWEIGHT consistently outperforms mainstream static-weighting baselines across diverse network scales and topologies, converging significantly faster while keeping resource consumption low. Its lightweight, topology-agnostic, and optimizer-independent design underscores its practicality and generality.
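The summary above describes weights driven by loss discrepancies between a node and its neighbors. The paper does not spell out the exact formula here, so the following is a minimal sketch under one plausible choice (a softmax over absolute loss gaps); the function name `dynaweight_weights` and the `temperature` parameter are hypothetical, not from the paper.

```python
import numpy as np

def dynaweight_weights(local_loss, neighbor_losses, temperature=1.0):
    """Sketch of loss-driven neighbor weighting (assumed form, not the
    paper's exact rule): neighbors whose local loss differs more from
    our own receive larger aggregation weights."""
    discrepancies = np.abs(np.asarray(neighbor_losses, dtype=float) - local_loss)
    # Softmax over discrepancies: larger gap -> larger weight.
    scores = np.exp(discrepancies / temperature)
    return scores / scores.sum()
```

With `local_loss=0.5` and neighbor losses `[0.6, 1.5]`, the second neighbor (larger discrepancy) receives the larger weight, and the weights sum to one. Only quantities each node can compute from exchanged losses are used, consistent with the local-computation claim above.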

πŸ“ Abstract
In today's data-sensitive landscape, distributed learning has emerged as a vital tool, not only fortifying privacy but also streamlining computation. This is especially crucial in fully decentralized infrastructures, where the absence of a central aggregator makes local processing imperative. Here, we introduce DYNAWEIGHT, a novel framework for information aggregation in multi-agent networks. DYNAWEIGHT offers substantial acceleration in decentralized learning with minimal additional communication and memory overhead. Unlike traditional static weight assignments, such as Metropolis weights, DYNAWEIGHT dynamically allocates weights to neighboring servers based on their relative losses on local datasets. Consequently, it favors servers possessing diverse information, particularly under substantial data heterogeneity. Our experiments on the MNIST, CIFAR10, and CIFAR100 datasets, across various server counts and graph topologies, demonstrate notable improvements in training speed. Notably, DYNAWEIGHT functions as an aggregation scheme compatible with any underlying server-level optimization algorithm, underscoring its versatility and potential for widespread integration.
Problem

Research questions and friction points this paper is trying to address.

Accelerates decentralized learning with minimal communication overhead
Dynamically weights neighbors based on relative local dataset losses
Improves training speeds in data-heterogeneous multi-agent networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic weight allocation based on relative losses
Minimal communication and memory overhead enhancement
Compatible with any server-level optimization algorithm
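The last bullet claims compatibility with any server-level optimizer. A minimal sketch of why this holds: the aggregation step only mixes parameter vectors with convex weights, so any local update rule can be plugged in beforehand. The function `decentralized_round` and its arguments are illustrative assumptions, not an interface from the paper; plain SGD stands in for "any optimizer".

```python
import numpy as np

def decentralized_round(params, grads, neighbor_params, weights, self_weight, lr=0.1):
    """One hypothetical decentralized round: a local optimizer step
    (here plain SGD) followed by weighted averaging with neighbors.
    `weights` can come from any scheme, e.g. a DYNAWEIGHT-style rule;
    self_weight + sum(weights) is assumed to equal 1."""
    local = params - lr * grads                  # any local optimizer step fits here
    mixed = self_weight * local                  # convex combination with neighbors
    for w, p in zip(weights, neighbor_params):
        mixed += w * p
    return mixed
```

For example, with zero gradients, local parameters `[1, 0]`, one neighbor at `[3, 0]`, and equal weights `0.5`/`0.5`, the round returns the midpoint `[2, 0]`. Because the optimizer step and the mixing step are decoupled, the weighting scheme is agnostic to how `local` was produced.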
πŸ”Ž Similar Papers
No similar papers found.