🤖 AI Summary
This work addresses the inefficiency of model exchange and aggregation in decentralized federated learning over multi-hop wireless networks, where limited communication resources hinder performance. The authors propose a joint optimization framework that simultaneously selects routing paths and determines model pruning ratios to maximize model retention under communication latency constraints, thereby reducing model bias and accelerating convergence. By characterizing the coupling between model retention and transmission paths, the authors reformulate the original problem as a path-selection task, integrating model pruning, routing optimization, convergence analysis, and delay-aware parameter scheduling to jointly improve communication efficiency and learning performance. Experimental results show that, compared with an unpruned system, the proposed approach reduces average transmission latency by 27.8% and improves test accuracy by approximately 12%; it also achieves about 8% higher accuracy than baseline routing algorithms.
📝 Abstract
Decentralized federated learning (D-FL) enables privacy-preserving training without a central server, but multi-hop model exchanges and aggregation are often bottlenecked by communication resource constraints. To address this issue, we propose a joint routing-and-pruning framework that optimizes routing paths and pruning rates to maintain communication latency within prescribed limits. We analyze how the sum of model biases across all clients affects the convergence bound of D-FL and formulate an optimization problem that maximizes the model retention rate to minimize these biases under communication constraints. Further analysis reveals that each client's model retention rate is path-dependent, which reduces the original problem to a routing optimization. Leveraging this insight, we develop a routing algorithm that selects latency-efficient transmission paths, allowing more parameters to be delivered within the time budget and thereby improving D-FL convergence. Simulations demonstrate that, compared with unpruned systems, the proposed framework reduces average transmission latency by 27.8% and improves testing accuracy by approximately 12%. Furthermore, relative to standard benchmark routing algorithms, the proposed routing method improves accuracy by roughly 8%.
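To make the path-dependent retention idea concrete, here is a minimal sketch: for each client, among its candidate multi-hop paths, pick the one that lets the largest fraction of model parameters arrive within the latency budget; the pruning ratio is then the complement of that retention rate. The function names, the store-and-forward per-hop latency model, and the example numbers are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: a simplified store-and-forward latency model,
# not the paper's system model or algorithm.

def retention_rate(path_rates_bps, model_bits, latency_budget_s):
    """Fraction of the model deliverable over a multi-hop path within
    the latency budget, assuming each hop retransmits the full
    (pruned) model at its own link rate."""
    # Time to send b bits over the path = b * sum(1/r) over hop rates r.
    seconds_per_bit = sum(1.0 / r for r in path_rates_bps)
    deliverable_bits = latency_budget_s / seconds_per_bit
    return min(1.0, deliverable_bits / model_bits)

def select_path(candidate_paths, model_bits, latency_budget_s):
    """Choose the candidate path that maximizes the retention rate."""
    return max(
        candidate_paths,
        key=lambda p: retention_rate(p, model_bits, latency_budget_s),
    )

# Example: two candidate paths (hop rates in bit/s) for a 1 Mbit model
# and a 1 s latency budget.
paths = [
    [2e6, 1e6],  # two hops: 2 Mbit/s then 1 Mbit/s -> retention ~0.67
    [0.8e6],     # single slower hop              -> retention 0.8
]
best = select_path(paths, model_bits=1e6, latency_budget_s=1.0)
```

Note the non-obvious outcome the abstract alludes to: the single 0.8 Mbit/s hop beats the path whose first link is faster, because cumulative multi-hop latency, not any individual link rate, determines how much of the model survives within the budget.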