FedUNet: A Lightweight Additive U-Net Module for Federated Learning with Heterogeneous Models

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Most existing federated learning approaches assume homogeneous client models, limiting their applicability in realistic heterogeneous environments. To address this, we propose FedUNet—a lightweight, plug-and-play additive U-Net–style module attached atop diverse client backbone networks, enabling cross-architecture knowledge transfer without structural alignment. Crucially, only the compact bottleneck-layer parameters are shared across clients, drastically reducing communication overhead. The module's encoder-decoder architecture with skip connections jointly extracts multi-scale features, facilitating client-agnostic representation learning. Experiments under VGG-based model heterogeneity demonstrate that FedUNet achieves 93.11% test accuracy, and 92.68% in its compact variant, while incurring merely 0.89 MB of total communication cost—striking a favorable balance between model performance and bandwidth consumption.

📝 Abstract
Federated learning (FL) enables decentralized model training without sharing local data. However, most existing methods assume identical model architectures across clients, limiting their applicability in heterogeneous real-world environments. To address this, we propose FedUNet, a lightweight and architecture-agnostic FL framework that attaches a U-Net-inspired additive module to each client's backbone. By sharing only the compact bottleneck of the U-Net, FedUNet enables efficient knowledge transfer without structural alignment. The encoder-decoder design and skip connections in the U-Net help capture both low-level and high-level features, facilitating the extraction of client-invariant representations. This enables cooperative learning between the backbone and the additive module with minimal communication cost. Experiments with VGG variants show that FedUNet achieves 93.11% accuracy, and 92.68% in compact form (i.e., a lightweight version of FedUNet), with only 0.89 MB of communication overhead.
Problem

Research questions and friction points this paper is trying to address.

Enables federated learning with heterogeneous client model architectures
Reduces communication overhead in decentralized model training
Extracts client-invariant representations without structural alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight U-Net additive module for FL
Architecture-agnostic framework with shared bottleneck
Encoder-decoder design with skip connections
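The shared-bottleneck idea above can be sketched in plain Python. This is a hypothetical illustration, not the paper's implementation: parameter names (`"bottleneck."`, `"backbone."`), the flat state-dict representation, and plain FedAvg over the shared keys are all assumptions. The point it shows is that clients with different backbone architectures can still aggregate, because only the identically shaped bottleneck parameters travel to the server.

```python
# Hypothetical sketch of FedUNet-style partial parameter sharing:
# each client keeps its own (heterogeneous) backbone private and only
# the compact U-Net bottleneck parameters are averaged and broadcast.
# All names below are illustrative assumptions, not the paper's code.

def average_bottlenecks(client_states, shared_prefix="bottleneck."):
    """FedAvg restricted to parameters whose names start with shared_prefix."""
    shared_keys = [k for k in client_states[0] if k.startswith(shared_prefix)]
    n = len(client_states)
    return {
        k: [sum(vals) / n for vals in zip(*(c[k] for c in client_states))]
        for k in shared_keys
    }

def broadcast(client_states, averaged):
    """Overwrite each client's shared parameters with the server average.

    Backbone parameters are never touched, so clients may use
    entirely different architectures (e.g. VGG variants of different depth).
    """
    for state in client_states:
        state.update({k: list(v) for k, v in averaged.items()})

# Two clients with different backbones but a common bottleneck shape.
clients = [
    {"backbone.vgg11.w": [0.1, 0.2, 0.3], "bottleneck.w": [1.0, 2.0]},
    {"backbone.vgg16.w": [0.5],           "bottleneck.w": [3.0, 4.0]},
]
avg = average_bottlenecks(clients)
broadcast(clients, avg)
print(avg["bottleneck.w"])  # [2.0, 3.0]
```

Because only the bottleneck (a few layers at the narrowest point of the U-Net) is communicated, the per-round payload stays small, which is consistent with the 0.89 MB total communication cost reported above.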
Beomseok Seo
Department of Mobile Systems Engineering, PRIMUS International College, Dankook University, Yong-in, Korea
Kichang Lee
Ph.D Student, Yonsei University
Machine Learning, Deep Learning, Medical AI, Mobile Computing, Security
JaeYeon Park
Department of Mobile Systems Engineering, PRIMUS International College, Dankook University, Yong-in, Korea