Federated Unsupervised Domain Generalization using Global and Local Alignment of Gradients

📅 2024-05-25
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses domain generalization in unsupervised federated learning, tackling for the first time the problem of generalizing a model trained on multiple source domains to an unseen target domain, without labels, teacher models, or access to target-domain data. Theoretically, it establishes a connection between domain shift and the discrepancy of gradient distributions across domains. Building on this insight, the authors propose FedGaLA, a two-level gradient-alignment framework: at the client level, local gradients are aligned to encourage domain-invariant features; at the server level, global gradients are aligned to improve the generalization of the aggregated model. FedGaLA is the first method to introduce gradient alignment into unsupervised federated domain generalization. It outperforms comparable baselines on four standard multi-domain benchmarks (PACS, OfficeHome, DomainNet, and TerraInc), and ablation studies validate the contribution of each component. The source code is publicly available.

📝 Abstract
We address the problem of federated domain generalization in an unsupervised setting for the first time. We first theoretically establish a connection between domain shift and alignment of gradients in unsupervised federated learning and show that aligning the gradients at both client and server levels can facilitate the generalization of the model to new (target) domains. Building on this insight, we propose a novel method named FedGaLA, which performs gradient alignment at the client level to encourage clients to learn domain-invariant features, as well as global gradient alignment at the server to obtain a more generalized aggregated model. To empirically evaluate our method, we perform various experiments on four commonly used multi-domain datasets, PACS, OfficeHome, DomainNet, and TerraInc. The results demonstrate the effectiveness of our method, which outperforms comparable baselines. Ablation and sensitivity studies demonstrate the impact of different components and parameters in our approach. The source code is available at: https://github.com/MahdiyarMM/FedGaLA.
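To make the two-level alignment idea concrete, here is a minimal sketch of cosine-similarity-based gradient alignment: a client-side step that removes the component of a local gradient that conflicts with a reference direction, and a server-side step that aggregates only client updates agreeing with the mean update. This is an illustrative assumption of how gradient alignment can be realized, not the exact FedGaLA procedure; the function names `align_gradient` and `server_aggregate` are hypothetical, and the authoritative implementation is in the linked repository.

```python
import numpy as np

def align_gradient(local_grad: np.ndarray, reference_grad: np.ndarray) -> np.ndarray:
    """Client-level alignment (sketch): if the local gradient conflicts with the
    reference direction (negative inner product), project out the opposing
    component so the remaining update does not move against the reference."""
    if np.dot(local_grad, reference_grad) >= 0:
        return local_grad  # already aligned; keep as-is
    ref_unit = reference_grad / (np.linalg.norm(reference_grad) + 1e-12)
    return local_grad - np.dot(local_grad, ref_unit) * ref_unit

def server_aggregate(client_updates: list[np.ndarray]) -> np.ndarray:
    """Server-level alignment (sketch): average only the client updates whose
    direction agrees with the mean update, discarding conflicting ones."""
    mean_update = np.mean(client_updates, axis=0)
    kept = [u for u in client_updates if np.dot(u, mean_update) > 0]
    return np.mean(kept, axis=0) if kept else mean_update
```

For example, a local gradient pointing directly against the reference is reduced to its orthogonal component (here, the zero vector), while an aligned gradient passes through unchanged.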
Problem

Research questions and friction points this paper is trying to address.

Unsupervised Domain Adaptation
Multi-domain Learning
Model Generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

FedGaLA
Unsupervised Domain Adaptation
Gradient Alignment