🤖 AI Summary
Federated domain adaptation (FDA) confronts dual challenges: inter-domain distribution shift and severe label scarcity at the target client. Existing approaches either neglect the target-side data scarcity or fail to selectively transfer source knowledge according to the target task's requirements. To address this, we propose a model functionality distance-guided adaptive aggregation method. Specifically, each target client quantifies functional divergence between the local and source models by the angle between their mean gradient fields; source models are then weighted via a Gompertz function, which provides task-aware, non-linear normalization, and aggregated server-side. This enables targeted knowledge filtering and transfer within the federated learning framework. Extensive experiments on multiple real-world datasets demonstrate that our method achieves significantly higher test accuracy than state-of-the-art federated learning, personalized federated learning, and FDA baselines.
📝 Abstract
Federated Domain Adaptation (FDA) is a federated learning (FL) approach that improves model performance at the target client by collaborating with source clients while preserving data privacy. FDA faces two primary challenges: domain shifts between source and target data and limited labeled data at the target. Most existing FDA methods focus on domain shifts, assuming ample target data, and thus neglect the combined challenge of domain shifts and data scarcity. Moreover, approaches that do address both challenges fail to prioritize sharing the source-client information most relevant to the target's objective. In this paper, we propose FedDAF, a novel approach addressing both challenges in FDA. FedDAF performs similarity-based aggregation of the global source model and the target model by computing a model functional distance from their mean gradient fields evaluated on target data. This enables effective model aggregation driven by the target objective, constructed from target data, even when that data is limited. To compute the functional distance between the two models, FedDAF measures the angle between their mean gradient fields and normalizes it with the Gompertz function. To construct the global source model, all local source models are aggregated by simple averaging on the server. Experiments on real-world datasets demonstrate FedDAF's superiority over existing FL, personalized FL (PFL), and FDA methods in achieving higher test accuracy.
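The aggregation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: models are represented as flat parameter vectors, `mean_grad_*` stand for the mean gradient fields evaluated on target data, and the Gompertz shape parameters `b` and `c` are hypothetical choices (the paper does not specify them here).

```python
import numpy as np

def angular_distance(g_src, g_tgt):
    """Angle (radians) between the mean gradient fields of two models."""
    cos = np.dot(g_src, g_tgt) / (np.linalg.norm(g_src) * np.linalg.norm(g_tgt))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def gompertz_weight(angle, b=5.0, c=2.0):
    """Normalize the angle with a Gompertz curve.

    Maps angles in [0, pi] into (0, 1): a small angle (functionally
    similar models) yields a source weight near 1, a large angle a
    weight near 0. b and c are illustrative shape parameters.
    """
    return 1.0 - np.exp(-b * np.exp(-c * angle))

def aggregate(theta_src, theta_tgt, mean_grad_src, mean_grad_tgt):
    """Similarity-based aggregation of global source and target models."""
    w = gompertz_weight(angular_distance(mean_grad_src, mean_grad_tgt))
    return w * theta_src + (1.0 - w) * theta_tgt
```

A functionally similar source model (aligned gradients, angle near 0) dominates the aggregate, while a divergent one (angle near pi) contributes little, which is the "targeted knowledge filtering" behavior the abstract describes. The global source model itself would be formed beforehand by simply averaging the local source models on the server.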