AI Summary
To address the degraded wireless network QoS caused by poor UAV base station placement in dynamic, obstacle-rich environments, this paper proposes RLpos-3, a novel end-to-end reinforcement learning framework. By deeply integrating standard RL algorithm libraries (PPO/SAC) with the ns-3 network simulator, RLpos-3 enables closed-loop training and evaluation of obstacle-aware, QoS-driven, real-time 3D UAV positioning policies. Leveraging Python binding interfaces and high-fidelity 3D channel modeling, it establishes a reproducible and scalable simulation-based validation paradigm. Evaluated in realistic urban scenarios, UAVs deployed with RLpos-3 achieve a 37% improvement in average user throughput and raise the end-to-end latency compliance rate to 92%, demonstrating the framework's effectiveness and robustness in complex wireless environments.
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly being utilized to enhance the Quality of Service (QoS) in wireless networks due to their flexibility and cost-effectiveness. However, optimizing UAV placement in dynamic and obstacle-prone environments remains a research challenge. Reinforcement Learning (RL) has proven to be an effective approach that offers adaptability and robustness in such environments. This paper introduces RLpos-3, a novel framework that integrates standard RL techniques and existing libraries with Network Simulator 3 (ns-3) to facilitate the development and evaluation of UAV positioning algorithms. RLpos-3 serves as a complementary tool for researchers, enabling the implementation, analysis, and benchmarking of UAV positioning strategies across different environmental settings while ensuring user traffic demands are met. To validate its effectiveness, we present a use case demonstrating the performance of RLpos-3 in optimizing UAV placement under realistic conditions.
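The closed loop described above, in which an RL agent proposes 3D UAV positions and the network simulator returns QoS feedback, typically takes the shape of a Gym-style environment. The sketch below is a hypothetical, self-contained illustration of that interface only: the `UavPlacementEnv` class, its toy path-loss reward, and all parameter names are assumptions for illustration, not the RLpos-3 API; in the actual framework, the reward would come from ns-3 throughput and latency measurements via its Python bindings rather than the stand-in geometry used here.

```python
import numpy as np


class UavPlacementEnv:
    """Hypothetical Gym-style sketch of the RLpos-3 loop: the agent
    moves a UAV base station in 3D, and a toy stand-in for ns-3
    returns a QoS-proxy reward (illustration only, not the real API)."""

    def __init__(self, n_users=5, area=100.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.area = area
        # Fixed ground users (x, y, z=0) scattered over the area.
        self.users = np.column_stack([
            self.rng.uniform(0.0, area, size=(n_users, 2)),
            np.zeros((n_users, 1)),
        ])
        self.uav = None

    def reset(self):
        # Start the UAV at the area center at 50 m altitude.
        self.uav = np.array([self.area / 2, self.area / 2, 50.0])
        return self.uav.copy()

    def step(self, action):
        # action: 3D displacement, clipped to a 10 m move per step.
        move = np.clip(np.asarray(action, dtype=float), -10.0, 10.0)
        self.uav = np.clip(self.uav + move, 0.0, self.area)
        # Toy QoS proxy: negative mean log-distance to the users.
        # In RLpos-3, ns-3 would instead report measured throughput
        # and end-to-end latency for the reward signal.
        d = np.linalg.norm(self.users - self.uav, axis=1)
        reward = float(-np.mean(np.log10(np.maximum(d, 1.0))))
        return self.uav.copy(), reward, False, {}


env = UavPlacementEnv()
obs = env.reset()
obs, reward, done, info = env.step([1.0, -2.0, 0.0])
```

An off-the-shelf PPO or SAC implementation can then be trained against such an environment unchanged, which is the benchmarking workflow the framework is built to support.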