Decentralized Parameter-Free Online Learning

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the design of parameter-free algorithms with provable network regret guarantees for decentralized online learning. We propose the first algorithmic family that integrates multi-agent coin-betting with gossip communication, introducing a novel “betting function” analytical framework that uniformly characterizes both individual and network-level regret, significantly simplifying multi-agent decentralized regret analysis. Theoretically, the method achieves a sublinear network regret bound of $O(\sqrt{T})$ over connected communication graphs without requiring any hyperparameter tuning. Empirical evaluation on synthetic benchmarks and real-world distributed sensing tasks confirms its robustness and efficiency. Key contributions include: (i) the first incorporation of coin-betting strategies into decentralized online learning; (ii) a general, parameter-free analytical paradigm; and (iii) a scalable, communication-efficient framework for distributed collaborative learning.

📝 Abstract
We propose the first parameter-free decentralized online learning algorithms with network regret guarantees, which achieve sublinear regret without requiring hyperparameter tuning. This family of algorithms connects multi-agent coin-betting and decentralized online learning via gossip steps. To enable our decentralized analysis, we introduce a novel "betting function" formulation for coin-betting that simplifies the multi-agent regret analysis. Our analysis shows sublinear network regret bounds and is validated through experiments on synthetic and real datasets. This family of algorithms is applicable to distributed sensing, decentralized optimization, and collaborative ML applications.
Problem

Research questions and friction points this paper is trying to address.

Develop parameter-free decentralized online learning algorithms
Achieve sublinear network regret without hyperparameter tuning
Connect multi-agent coin-betting with decentralized learning via gossip
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-free decentralized online learning algorithms
Connects coin-betting with gossip steps
Novel betting function simplifies regret analysis
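The coin-betting-plus-gossip idea can be illustrated with a minimal sketch: each agent runs a standard Krichevsky-Trofimov (KT) coin bettor on a 1-D online linear problem and mixes its state with neighbors through a doubly-stochastic gossip matrix each round. This is a generic illustration under those assumptions, not the paper's exact algorithm; the ring topology, mixing weights, and random gradients are all hypothetical choices.

```python
import numpy as np

# Hypothetical setup: n agents on a ring with a symmetric,
# doubly-stochastic gossip matrix (rows and columns sum to 1).
n, T = 4, 200
W = (0.5 * np.eye(n)
     + 0.25 * np.roll(np.eye(n), 1, axis=1)
     + 0.25 * np.roll(np.eye(n), -1, axis=1))

rng = np.random.default_rng(0)
wealth = np.ones(n)       # each agent starts with unit wealth
grad_sum = np.zeros(n)    # cumulative negative gradients per agent

for t in range(1, T + 1):
    # KT bettor: bet a signed fraction of wealth proportional
    # to the average of past (negative) gradients.
    x = grad_sum / t * wealth
    # Environment reveals bounded gradients g_t in [-1, 1]
    # (random here; adversarial in the online-learning setting).
    g = rng.uniform(-1, 1, size=n)
    # Coin-betting bookkeeping: wealth shrinks or grows with the bet.
    wealth = wealth - g * x
    grad_sum = grad_sum - g
    # Gossip step: each agent averages its state with its neighbors.
    wealth = W @ wealth
    grad_sum = W @ grad_sum
```

The KT rule keeps each bet's magnitude below the agent's current wealth, so wealth stays positive, and gossip averaging preserves that; the averaging step is what couples the agents' otherwise independent bettors.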
Tomas Ortega
Center for Pervasive Communications & Computing and EECS Department, University of California, Irvine, Irvine, CA 92697 USA
Hamid Jafarkhani
Chancellor's Professor, University of California, Irvine