🤖 AI Summary
Manual tuning of handover parameters can no longer achieve dynamic load balancing in ultra-dense 5G cellular networks; to address this, the paper proposes a decentralized multi-agent reinforcement learning (MARL) framework. The method jointly models the coupled effects of three interdependent handover behaviors (serving-cell handover, neighbor-cell reselection, and mobility robustness optimization), which the authors present as the first such integrated formulation. They design a distributed MAPPO training mechanism based on consensus approximation, with a theoretical guarantee that the error in estimating the global average load is bounded. Using wireless signaling modeling and 5G system-level simulation under standardized scenarios, the approach reduces inter-cell load standard deviation by 37% and increases system throughput by 19% over baselines, while significantly improving handover success rate and user quality of experience (QoE).
📝 Abstract
In cellular networks, cell handover refers to the process by which a device switches from one base station to another, and this mechanism is crucial for balancing the load among cells. Traditionally, engineers manually adjusted handover parameters based on experience, but the explosive growth in the number of cells has rendered manual tuning impractical. Existing research tends to overlook critical engineering details in order to simplify the handover problem. In this paper, we classify cell handovers into three types and jointly model their mutual influence. To achieve load balancing, we propose a scheme based on multi-agent reinforcement learning (MARL) that automatically optimizes the parameters. To reduce agent interaction costs, distributed training is implemented based on a consensus approximation of the global average load, and the approximation error is shown to be bounded. Experimental results show that our proposed scheme outperforms existing benchmarks in balancing load and improving network performance.
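The consensus approximation mentioned above is the key to distributed training: each cell (agent) needs the global average load but only talks to its neighbors. The idea can be illustrated with a classic gossip-averaging sketch, not the paper's actual algorithm; the ring topology, step size `alpha`, and function name `consensus_average` are illustrative assumptions.

```python
import numpy as np

def consensus_average(loads, steps=200, alpha=0.3):
    """Hypothetical gossip-averaging sketch (not the paper's algorithm).

    Each agent repeatedly updates its local estimate toward its two ring
    neighbors: x_i <- x_i + alpha * sum_{j in N(i)} (x_j - x_i).
    Because the mixing matrix is doubly stochastic, every estimate
    converges to the global mean, with error shrinking geometrically.
    """
    x = np.asarray(loads, dtype=float)
    for _ in range(steps):
        left = np.roll(x, 1)    # load estimate of the left ring neighbor
        right = np.roll(x, -1)  # load estimate of the right ring neighbor
        x = x + alpha * ((left - x) + (right - x))
    return x

# Illustrative per-cell load values; true global average is 0.5.
loads = [0.9, 0.2, 0.5, 0.7, 0.1, 0.6]
estimates = consensus_average(loads)
# After enough gossip steps, every agent's estimate is close to 0.5,
# so each agent can act on the global average without central coordination.
```

The geometric convergence rate of such schemes is what makes a bounded-error guarantee of the kind claimed in the paper plausible: after finitely many gossip rounds, each agent's deviation from the true global average is controlled by the spectral gap of the mixing matrix.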