Who Should I Listen To? Adaptive Collaboration in Personalized Federated Learning

πŸ“… 2025-06-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In federated learning, data heterogeneity often degrades personalized model performance relative to local training. Method: This paper proposes an adaptive, sample-level collaboration mechanism that departs from the conventional parameter-sharing paradigm. It dynamically models fine-grained inter-client trust relationships based on prediction confidence, enabling data-aware collaboration. The authors introduce FEDMOSAIC, a framework integrating prediction exchange over unlabeled public data, loss reweighting, and dynamic pseudo-label contribution. Contribution/Results: FEDMOSAIC significantly outperforms state-of-the-art methods across diverse non-IID settings and comes with convergence guarantees for personalized federated learning under standard assumptions. Notably, it is the first approach to raise collaboration granularity from the client level to the sample level, enhancing both generalization and personalization.

πŸ“ Abstract
Data heterogeneity is a central challenge in federated learning, and personalized federated learning (PFL) aims to address it by tailoring models to each client's distribution. Yet many PFL methods fail to outperform local or centralized baselines, suggesting a mismatch between the collaboration they enforce and the structure of the data. We propose an approach based on adaptive collaboration, where clients decide adaptively not only how much to rely on others, but also whom to trust at the level of individual examples. We instantiate this principle in FEDMOSAIC, a federated co-training method in which clients exchange predictions over a shared unlabeled dataset. This enables fine-grained trust decisions that are difficult to achieve with parameter sharing alone. Each client adjusts its loss weighting based on the agreement between private and public data, and contributes to global pseudo-labels in proportion to its estimated per-example confidence. Empirically, FEDMOSAIC improves upon state-of-the-art PFL methods across diverse non-IID settings, and we provide convergence guarantees under standard assumptions. Our results demonstrate the potential of data-aware collaboration for robust and effective personalization.
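The abstract's key mechanism is that each client contributes to global pseudo-labels in proportion to its per-example confidence. The paper's exact weighting rule is not reproduced here; the sketch below is a hypothetical illustration (function name, confidence measure, and normalization are assumptions) of confidence-weighted pseudo-label aggregation over a shared unlabeled set.

```python
import numpy as np

def aggregate_pseudo_labels(client_probs):
    """Combine per-client softmax predictions on a shared unlabeled set
    into global soft pseudo-labels, weighting each client per example
    by its prediction confidence (here: the max softmax probability).

    client_probs: array of shape (n_clients, n_examples, n_classes).
    Returns pseudo-labels of shape (n_examples, n_classes).
    """
    # Per-example confidence of each client: its highest class probability.
    conf = client_probs.max(axis=2)                    # (n_clients, n_examples)
    # Normalize confidences across clients so weights sum to 1 per example.
    weights = conf / conf.sum(axis=0, keepdims=True)   # (n_clients, n_examples)
    # Confidence-weighted average of the clients' predicted distributions.
    return (weights[..., None] * client_probs).sum(axis=0)

# Toy example: 2 clients, 1 public example, 3 classes.
p = np.array([
    [[0.8, 0.1, 0.1]],   # confident client
    [[0.4, 0.3, 0.3]],   # less confident client
])
labels = aggregate_pseudo_labels(p)
```

In this toy case the confident client receives twice the weight of the uncertain one, so the pseudo-label leans toward its prediction while remaining a valid probability distribution.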
Problem

Research questions and friction points this paper is trying to address.

Address data heterogeneity in federated learning through personalized models
Enable adaptive client collaboration for fine-grained trust decisions
Improve performance in non-IID settings with data-aware collaboration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive collaboration among clients
Fine-grained trust decisions via prediction exchange
Loss weighting based on data agreement
πŸ”Ž Similar Papers
No similar papers found.