UnifiedFL: A Dynamic Unified Learning Framework for Equitable Federation

📅 2025-10-30
🤖 AI Summary
Existing federated learning (FL) struggles with three concurrent challenges: client-level model architecture heterogeneity (e.g., CNNs, GNNs, MLPs), non-IID data distributions, and test-domain shift. To address these, we propose UnifiedFL, a framework that models heterogeneous local architectures as directed model graphs and unifies their parameterization via a shared graph neural network (GNN) backbone. UnifiedFL introduces distance-driven dynamic clustering and a two-level aggregation mechanism to balance convergence and model diversity. Crucially, it enables collaborative training across genuinely different architecture types without requiring model alignment or knowledge distillation. Evaluated on MedMNIST classification and hippocampus segmentation tasks, UnifiedFL outperforms state-of-the-art FL methods, achieving up to a 12.3% improvement in generalization performance, stronger cross-domain adaptability, and effective mitigation of domain fracture.

📝 Abstract
Federated learning (FL) has emerged as a key paradigm for collaborative model training across multiple clients without sharing raw data, enabling privacy-preserving applications in areas such as radiology and pathology. However, works on collaborative training across clients with fundamentally different neural architectures and non-identically distributed datasets remain scarce. Existing FL frameworks face several limitations. Despite claiming to support architectural heterogeneity, most recent FL methods only tolerate variants within a single model family (e.g., shallower, deeper, or wider CNNs), still presuming a shared global architecture and failing to accommodate federations where clients deploy fundamentally different network types (e.g., CNNs, GNNs, MLPs). Moreover, existing approaches often address only statistical heterogeneity while overlooking the domain-fracture problem, where each client's data distribution differs markedly from that faced at testing time, undermining model generalizability. When clients use different architectures, have non-identically distributed data, and encounter distinct test domains, current methods perform poorly. To address these challenges, we propose UnifiedFL, a dynamic federated learning framework that represents heterogeneous local networks as nodes and edges in a directed model graph optimized by a shared graph neural network (GNN). UnifiedFL introduces (i) a common GNN to parameterize all architectures, (ii) distance-driven clustering via Euclidean distances between clients' parameters, and (iii) a two-tier aggregation policy balancing convergence and diversity. Experiments on MedMNIST classification and hippocampus segmentation benchmarks demonstrate UnifiedFL's superior performance. Code and data: https://github.com/basiralab/UnifiedFL
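As a toy illustration of the model-graph idea described in the abstract, a sequential network can be encoded as a directed graph with one node per layer, so that a shared GNN can consume clients with fundamentally different architectures in one parameter space. The helper name, layer-type vocabulary, and node-feature layout below are illustrative assumptions, not the paper's actual representation.

```python
import numpy as np

def model_to_graph(layer_dims, layer_types):
    """Encode a sequential network as a directed graph: one node per
    layer, directed edges following the forward pass, and node features
    holding an integer layer-type id plus the layer width.

    This is a minimal sketch; UnifiedFL's real model graph may encode
    richer structure (skip connections, operation attributes, etc.).
    """
    n = len(layer_dims)
    adj = np.zeros((n, n))
    for i in range(n - 1):
        adj[i, i + 1] = 1.0  # directed edge: layer i -> layer i+1
    # Illustrative layer-type vocabulary (an assumption, not from the paper).
    type_ids = {"conv": 0, "linear": 1, "gnn": 2}
    feats = np.array(
        [[type_ids[t], d] for t, d in zip(layer_types, layer_dims)],
        dtype=float,
    )
    return adj, feats
```

Because every client's architecture reduces to an adjacency matrix plus node features, a single GNN backbone can, in principle, process CNN, MLP, and GNN clients alike without a shared global architecture.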
Problem

Research questions and friction points this paper is trying to address.

Addressing architectural heterogeneity across fundamentally different neural network types in federated learning
Solving the domain-fracture problem, where each client's data distribution differs from the test-time distribution
Handling non-identically distributed datasets across clients with diverse architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Representing heterogeneous client networks as a directed model graph
Using a shared GNN to parameterize all client architectures
Employing distance-driven clustering and a two-tier aggregation policy
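The clustering and aggregation ideas above can be sketched as follows: clients are grouped by Euclidean distance between their flattened parameter vectors, local updates are first averaged within each cluster, and the cluster means are then averaged globally. The k-means grouping, function names, and flattened-vector representation are assumptions for illustration; the paper's exact procedure may differ.

```python
import numpy as np

def cluster_clients(params, n_clusters=2, n_iters=10, seed=0):
    """Group clients by Euclidean distance between their (flattened)
    parameter vectors, using a plain k-means loop as a stand-in for
    the paper's distance-driven dynamic clustering."""
    rng = np.random.default_rng(seed)
    X = np.stack(params)
    centers = X[rng.choice(len(X), size=n_clusters, replace=False)]
    for _ in range(n_iters):
        # Pairwise Euclidean distances: clients x centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def two_tier_aggregate(params, labels):
    """Two-tier policy: average within each cluster first (preserving
    diversity across clusters), then average the cluster means into
    a single global update."""
    X = np.stack(params)
    cluster_means = [X[labels == k].mean(axis=0) for k in np.unique(labels)]
    return np.mean(cluster_means, axis=0)
```

Note the design intent: averaging cluster means, rather than all clients directly, keeps a small cluster of atypical clients from being drowned out by a large majority cluster, which is one way to balance convergence against model diversity.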
Furkan Pala
BASIRA Lab, Imperial-X (I-X) and Department of Computing, Imperial College London, London, United Kingdom
Islem Rekik
BASIRA lab
Machine and Deep Learning · Neuroimaging · Network Neuroscience · Predictive Intelligence in Medicine · Connectomics