Towards Seamless Hierarchical Federated Learning under Intermittent Client Participation: A Stagewise Decision-Making Methodology

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high latency, excessive energy consumption, and slow convergence that intermittent client availability causes in hierarchical federated learning (HFL) over edge computing, this paper proposes the first two-stage dynamic decision framework (Plan A/B). Stage I performs long-term participation planning via client availability modeling and probabilistic prediction; Stage II executes lightweight online backup selection, thereby decomposing the NP-hard joint optimization problem and balancing system stability with responsiveness. The framework also generalizes to conventional federated learning (FL). Extensive experiments on MNIST and CIFAR-10 show up to 3.2% higher model accuracy and up to 28.7% lower total system cost (latency plus energy consumption) compared with state-of-the-art baselines.

📝 Abstract
Federated Learning (FL) offers a pioneering distributed learning paradigm that enables devices/clients to build a shared global model. This global model is obtained through frequent model transmissions between clients and a central server, which may cause high latency, energy consumption, and congestion over backhaul links. To overcome these drawbacks, Hierarchical Federated Learning (HFL) has emerged, which organizes clients into multiple clusters and utilizes edge nodes (e.g., edge servers) for intermediate model aggregations between clients and the central server. Current research on HFL mainly focuses on enhancing model accuracy, latency, and energy consumption in scenarios with a stable/fixed set of clients. However, addressing the dynamic availability of clients -- a critical aspect of real-world scenarios -- remains underexplored. This study delves into optimizing client selection and client-to-edge associations in HFL under intermittent client participation so as to minimize overall system costs (i.e., delay and energy), while achieving fast model convergence. We unveil that achieving this goal involves solving a complex NP-hard problem. To tackle this, we propose a stagewise methodology that splits the solution into two stages, referred to as Plan A and Plan B. Plan A focuses on identifying long-term clients with a high chance of participation in subsequent model training rounds. Plan B serves as a backup, selecting alternative clients when long-term clients are unavailable during model training rounds. This stagewise methodology offers a fresh perspective on client selection that can enhance both HFL and conventional FL by enabling low-overhead decision-making processes. Through evaluations on MNIST and CIFAR-10 datasets, we show that our methodology outperforms existing benchmarks in terms of model accuracy and system costs.
Problem

Research questions and friction points this paper is trying to address.

Optimizing client selection in intermittent hierarchical federated learning
Minimizing system costs while achieving fast model convergence
Solving NP-hard problem of dynamic client-edge associations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Federated Learning with intermittent clients
Stagewise decision-making with Plan A and Plan B
Optimizing client selection to minimize system costs
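The Plan A/Plan B idea described above can be sketched in a few lines: Plan A pre-selects clients whose predicted participation probability is highest, and Plan B back-fills at each round from whoever is actually online. This is a minimal illustrative sketch, not the paper's algorithm; the availability estimates, client identifiers, and the greedy back-fill rule are assumptions for illustration.

```python
def plan_a(avail_prob, k):
    """Stage I (Plan A): pick the k clients with the highest predicted
    long-term participation probability. `avail_prob` maps client id ->
    estimated availability (the estimator itself is assumed here)."""
    return set(sorted(avail_prob, key=avail_prob.get, reverse=True)[:k])


def plan_b(long_term, online_now, avail_prob, k):
    """Stage II (Plan B): at each training round, keep the long-term
    clients that showed up and back-fill the shortfall with the most
    promising clients that are currently online (greedy rule, assumed)."""
    selected = long_term & online_now
    backups = sorted(online_now - long_term, key=avail_prob.get, reverse=True)
    selected |= set(backups[: max(0, k - len(selected))])
    return selected


# Hypothetical example: c1 is planned long-term but drops out this round,
# so Plan B substitutes the best available alternative.
probs = {"c1": 0.9, "c2": 0.8, "c3": 0.4, "c4": 0.3}
long_term = plan_a(probs, 2)                      # {"c1", "c2"}
this_round = plan_b(long_term, {"c2", "c3", "c4"}, probs, 2)  # {"c2", "c3"}
```

The point of the split is cost: the expensive probabilistic planning runs once (Plan A), while the per-round decision (Plan B) is a cheap set operation, which matches the paper's goal of low-overhead online decision-making.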
Minghong Wu
School of Informatics, Xiamen University, Fujian, China
Minghui Liwang
Department of Control Science and Engineering, Tongji University, Shanghai, China; Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai, China
Yuhan Su
Xiamen University
Li Li
Department of Control Science and Engineering, Tongji University, Shanghai, China; Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai, China
Seyyedali Hosseinalipour
Department of Electrical Engineering, University at Buffalo–SUNY, NY, USA
Xianbin Wang
Department of Electrical and Computer Engineering, Western University, Ontario, Canada
Huaiyu Dai
Professor of Electrical and Computer Engineering, NC State University (Communications · Signal Processing · Networking · Security and Privacy · Machine Learning)
Zhenzhen Jiao
iF-Labs, Beijing Teleinfo Technology Co., Ltd., CAICT, China