Multi-Layer Hierarchical Federated Learning with Quantization

📅 2025-05-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hierarchical federated learning (FL) frameworks are largely restricted to two aggregation layers, limiting scalability and flexibility in large-scale heterogeneous networks. Method: We propose what appears to be the first multi-layer hierarchical FL framework (QMLHFL) supporting arbitrarily deep nested aggregation. The approach introduces (1) a generalized multi-layer hierarchical FL architecture; (2) a layer-specific quantization scheme tailored to communication constraints across layers; and (3) a convergence analysis that accounts for both communication and computation times and yields the optimal intra-layer iteration counts under a deadline constraint. Results: Experiments show that the framework maintains high accuracy even under high data heterogeneity, performs notably better with optimized iteration counts than with randomly selected values, and exhibits close agreement between the theoretical convergence rate and empirical behavior, supporting both its efficacy and practicality.

📝 Abstract
Almost all existing hierarchical federated learning (FL) models are limited to two aggregation layers, restricting scalability and flexibility in complex, large-scale networks. In this work, we propose a Multi-Layer Hierarchical Federated Learning framework (QMLHFL), which appears to be the first study that generalizes hierarchical FL to arbitrary numbers of layers and network architectures through nested aggregation, while employing a layer-specific quantization scheme to meet communication constraints. We develop a comprehensive convergence analysis for QMLHFL and derive a general convergence condition and rate that reveal the effects of key factors, including quantization parameters, hierarchical architecture, and intra-layer iteration counts. Furthermore, we determine the optimal number of intra-layer iterations to maximize the convergence rate while meeting a deadline constraint that accounts for both communication and computation times. Our results show that QMLHFL consistently achieves high learning accuracy, even under high data heterogeneity, and delivers notably improved performance when optimized, compared to using randomly selected values.
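
To make nested aggregation concrete, the following is a minimal sketch assuming weighted model averaging at every aggregator and a simple least-squares objective at the clients. The names (`Node`, `nested_round`) and the three-layer demo topology are illustrative, not from the paper, and the layer-specific quantization step is omitted here.

```python
import numpy as np

class Node:
    """A leaf client (holds data) or an internal aggregator (holds children)."""
    def __init__(self, children=None, data=None, weight=1.0):
        self.children = children or []
        self.data = data          # (X, y) pair for leaf clients, else None
        self.weight = weight      # aggregation weight (e.g., dataset size)

def local_sgd(model, data, steps, lr=0.1):
    """Leaf update: a few gradient steps on a least-squares objective."""
    X, y = data
    for _ in range(steps):
        model = model - lr * X.T @ (X @ model - y) / len(y)
    return model

def nested_round(node, model, depth, iters):
    """One global round via nested aggregation.

    iters[depth] is the number of intra-layer iterations at this depth:
    leaves interpret it as local SGD steps, aggregators as how many times
    they re-broadcast and re-aggregate before reporting upward.
    """
    if not node.children:
        return local_sgd(model, node.data, iters[depth])
    for _ in range(iters[depth]):
        child_models = [nested_round(c, model.copy(), depth + 1, iters)
                        for c in node.children]
        w = np.array([c.weight for c in node.children], dtype=float)
        w /= w.sum()
        model = sum(wi * m for wi, m in zip(w, child_models))
    return model

# Demo: a three-layer hierarchy (cloud -> 2 edge servers -> 2 clients each).
rng = np.random.default_rng(0)
true_w = rng.normal(size=3)
def make_client():
    X = rng.normal(size=(50, 3))
    return Node(data=(X, X @ true_w + 0.1 * rng.normal(size=50)), weight=50)
cloud = Node(children=[Node(children=[make_client(), make_client()])
                       for _ in range(2)])
model = np.zeros(3)
for _ in range(20):
    model = nested_round(cloud, model, depth=0, iters=[1, 2, 5])
print("distance to ground truth:", np.linalg.norm(model - true_w))
```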
Problem

Research questions and friction points this paper is trying to address.

Extends hierarchical FL to multi-layer networks for scalability
Introduces layer-specific quantization to address communication constraints in FL (see the quantizer sketch after this list)
Optimizes intra-layer iterations for convergence under deadline constraints
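
A common way to realize layer-specific quantization is an unbiased stochastic uniform quantizer (the QSGD-style operator below), with coarser levels on bandwidth-limited lower layers. Both the operator and the per-link level budgets here are stand-in assumptions, since the summary does not pin down the paper's exact scheme.

```python
import numpy as np

def stochastic_quantize(v, levels, rng=None):
    """Unbiased stochastic uniform quantization onto `levels` levels (QSGD-style)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    scaled = np.abs(v) / norm * levels            # position on the quantization grid
    low = np.floor(scaled)
    up = rng.random(v.shape) < (scaled - low)     # round up with the leftover probability
    return np.sign(v) * norm * (low + up) / levels

# Illustrative layer-specific budgets: coarse levels on bandwidth-limited
# device-to-edge links, finer levels near the cloud (assumed values).
levels_per_layer = {"device->edge": 16, "edge->region": 64, "region->cloud": 256}
update = np.random.default_rng(1).normal(size=10_000)
for link, s in levels_per_layer.items():
    q = stochastic_quantize(update, s)
    rel_err = np.linalg.norm(q - update) / np.linalg.norm(update)
    print(f"{link}: {s} levels, relative error {rel_err:.3f}")
```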
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-layer hierarchical FL with nested aggregation
Layer-specific quantization for communication efficiency
Optimal intra-layer iterations for convergence rate under a deadline (see the toy search after this list)
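
As a toy illustration of the deadline-constrained choice of intra-layer iterations, the sketch below searches a small grid under an assumed cost model (fixed per-hop communication time plus per-step computation time) and an assumed surrogate objective. The paper instead derives the optimum analytically from its convergence bound.

```python
from itertools import product

comm_time = [0.5, 0.2]   # seconds per aggregation hop at layers 0 and 1 (assumed)
comp_time = 0.01         # seconds per local SGD step (assumed)
deadline = 10.0          # wall-clock budget for one global round (assumed)

def round_time(iters):
    """Wall-clock time of one global round for per-layer iteration counts.

    iters = (cloud repeats, edge repeats, leaf SGD steps). Each aggregator
    layer repeats its subtree's work and pays one communication per repeat.
    """
    t = comp_time * iters[-1]                  # leaf computation
    for l in reversed(range(len(iters) - 1)):
        t = iters[l] * (t + comm_time[l])      # fold cost up the tree
    return t

def progress(iters):
    """Surrogate for convergence progress: total SGD steps per round (assumed)."""
    total = 1
    for k in iters:
        total *= k
    return total

feasible = (it for it in product(range(1, 11), repeat=3)
            if round_time(it) <= deadline)
best = max(feasible, key=progress)
print("best (cloud, edge, leaf) iterations:", best,
      "round time:", round(round_time(best), 2))
```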