Toward Enhancing Representation Learning in Federated Multi-Task Settings

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of model and task heterogeneity in real-world federated multi-task learning, where existing methods typically assume uniform model architectures across clients. To overcome this limitation, the authors propose FedMuscle, a novel algorithm that constructs a shared representation space instead of sharing model parameters directly. FedMuscle formulates cross-model representation alignment as maximizing mutual information among all client representations and introduces a new contrastive learning objective, termed Muscle loss, to enable effective knowledge fusion across heterogeneous tasks. The method is communication-efficient, accommodates arbitrary model and task heterogeneity, and demonstrates superior performance and robustness over state-of-the-art approaches across diverse image and language benchmarks.

📝 Abstract
Federated multi-task learning (FMTL) seeks to collaboratively train customized models for users with different tasks while preserving data privacy. Most existing approaches assume model congruity (i.e., the use of fully or partially homogeneous models) across users, which limits their applicability in realistic settings. To overcome this limitation, we aim to learn a shared representation space across tasks rather than shared model parameters. To this end, we propose Muscle loss, a novel contrastive learning objective that simultaneously aligns representations from all participating models. Unlike existing multi-view or multi-model contrastive methods, which typically align models pairwise, Muscle loss can effectively capture dependencies across tasks because its minimization is equivalent to the maximization of mutual information among all the models' representations. Building on this principle, we develop FedMuscle, a practical and communication-efficient FMTL algorithm that naturally handles both model and task heterogeneity. Experiments on diverse image and language tasks demonstrate that FedMuscle consistently outperforms state-of-the-art baselines, delivering substantial improvements and robust performance across heterogeneous settings.
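To make the all-models-at-once idea concrete, here is a minimal NumPy sketch of one plausible multi-model contrastive objective. This is a hypothetical illustration, not the paper's exact Muscle loss: it aligns each client model's batch of representations against a shared per-sample anchor (the normalized mean across models) rather than via pairwise model-to-model terms. The function name, anchor construction, and temperature value are all assumptions for illustration.

```python
import numpy as np

def multi_model_contrastive_loss(reps, temperature=0.1):
    """Hypothetical sketch of an all-model contrastive objective
    (not the authors' exact Muscle loss).

    reps: list of K arrays, each of shape (N, d) -- one batch of
          representations per client model, same sample order across models.
    Returns a scalar loss that encourages all K representations of
    sample i to agree via a shared anchor, instead of pairwise alignment.
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    Z = [l2norm(z) for z in reps]            # normalize each model's batch
    anchors = l2norm(np.mean(Z, axis=0))     # shared per-sample anchors (N, d)

    loss = 0.0
    for z in Z:
        logits = z @ anchors.T / temperature         # (N, N) cosine similarities
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        loss += -np.mean(np.diag(log_prob))          # positive = same sample index
    return loss / len(Z)
```

Because every model contributes to the same anchor, each term couples all K representation spaces at once, which is the qualitative property the abstract attributes to Muscle loss; the actual objective in the paper is derived from a mutual-information bound and will differ in form.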
Problem

Research questions and friction points this paper addresses.

Federated Multi-Task Learning
Model Heterogeneity
Task Heterogeneity
Representation Learning
Shared Representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Multi-Task Learning
Contrastive Learning
Representation Alignment
Model Heterogeneity
Mutual Information