SAFL: Structure-Aware Personalized Federated Learning via Client-Specific Clustering and SCSI-Guided Model Pruning

📅 2025-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
In heterogeneous federated learning, personalized model training faces high computational and communication overheads, as well as pruning mismatches caused by scarce local data. To address these issues, this paper proposes a lightweight framework that integrates client clustering with structured channel-wise pruning guided by Similar Client Structure Information (SCSI). Specifically, it employs similarity-guided client clustering, a joint pruning criterion based on SCSI, and a two-stage prune-train-aggregate paradigm to enable structure-aware customization of personalized submodels. The method significantly reduces model size and communication load while preserving accuracy and improving inference performance. Experimental results demonstrate superior performance over state-of-the-art personalized federated learning approaches across multiple heterogeneous benchmarks. Notably, this work is the first to deeply integrate clustering-guided personalization and structured pruning within the federated learning paradigm.

📝 Abstract
Federated Learning (FL) enables clients to collaboratively train machine learning models without sharing local data, preserving privacy in diverse environments. While traditional FL approaches preserve privacy, they often struggle with high computational and communication overhead. To address these issues, model pruning is introduced as a strategy to streamline computations. However, existing pruning methods, when applied solely based on local data, often produce sub-models that inadequately reflect clients' specific tasks due to data insufficiency. To overcome these challenges, this paper introduces SAFL (Structure-Aware Federated Learning), a novel framework that enhances personalized federated learning through client-specific clustering and Similar Client Structure Information (SCSI)-guided model pruning. SAFL employs a two-stage process: initially, it groups clients based on data similarities and uses aggregated pruning criteria to guide the pruning process, facilitating the identification of optimal sub-models. Subsequently, clients train these pruned models and engage in server-based aggregation, ensuring tailored and efficient models for each client. This method significantly reduces computational overhead while improving inference accuracy. Extensive experiments demonstrate that SAFL markedly diminishes model size and improves performance, making it highly effective in federated environments characterized by heterogeneous data.
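The paper does not publish pseudocode on this page; as a rough illustration of the two-stage process the abstract describes, the sketch below clusters clients by pairwise data similarity and derives a shared channel-pruning mask per cluster from aggregated importance scores. All function names, the greedy similarity-threshold clustering rule, and the mean-aggregated channel scores are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

def cluster_clients(similarity, threshold=0.5):
    """Greedily group clients whose pairwise data similarity to a
    seed client exceeds `threshold` (illustrative stand-in for the
    paper's similarity-guided clustering)."""
    n = similarity.shape[0]
    unassigned = set(range(n))
    clusters = []
    while unassigned:
        seed = min(unassigned)
        members = [c for c in sorted(unassigned)
                   if similarity[seed, c] >= threshold]
        clusters.append(members)
        unassigned -= set(members)
    return clusters

def scsi_pruning_mask(channel_scores, cluster, keep_ratio=0.5):
    """Aggregate per-channel importance scores across a cluster's
    clients (here: a simple mean) and keep the top `keep_ratio`
    fraction of channels -- a stand-in for the SCSI-guided joint
    pruning criterion."""
    agg = np.mean([channel_scores[c] for c in cluster], axis=0)
    k = max(1, int(round(keep_ratio * agg.size)))
    keep = np.argsort(agg)[-k:]          # indices of the k highest scores
    mask = np.zeros(agg.size, dtype=bool)
    mask[keep] = True
    return mask
```

Each client would then train its pruned submodel locally, with the server aggregating updates only among clients sharing the same mask, which is what makes the per-cluster mask communication-efficient.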
Problem

Research questions and friction points this paper is trying to address.

Privacy-preserving Federated Learning
Model Training Efficiency
Data Transmission Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

SAFL
Federated Learning
Data Similarity
Nan Li
Engineering Research Center of Software/Hardware Co-Design Technology and Application, Ministry of Education, and the Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai 200062, China
Xiaolu Wang
Software Engineering Institute, East China Normal University
Optimization, Data Science, Networked Systems
Xiao Du
Engineering Research Center of Software/Hardware Co-Design Technology and Application, Ministry of Education, and the Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai 200062, China
Puyu Cai
Computer Science Department, New York University, New York, NY 10012, United States
Ting Wang
Engineering Research Center of Software/Hardware Co-Design Technology and Application, Ministry of Education, and the Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai 200062, China