Optimizing 6G Dense Network Deployment for the Metaverse Using Deep Reinforcement Learning

📅 2025-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the integrated access and backhaul (IAB) node deployment optimization problem for 6G-enabled metaverse applications in dense urban environments, aiming to minimize base station (BS) deployment cost while satisfying the stringent requirements of immersive services—namely ultra-high data rates and massive connectivity—under dynamic urban conditions and strict spatial constraints. Method: We propose a novel Dueling Deep Q-Network (DQN) framework augmented with an action pruning mechanism, marking the first joint application of DQN, Double DQN, and Dueling DQN to IAB network planning. This design effectively tackles the scalability challenge posed by large state-action spaces. Contribution/Results: Experimental evaluation demonstrates that our approach reduces the required number of BSs by 12.3% on average, significantly outperforming conventional greedy heuristics. Moreover, it exhibits strong robustness and generalization across diverse initial donor configurations, confirming its practical viability for adaptive, large-scale IAB deployment in 6G metaverse scenarios.

📝 Abstract
As the Metaverse envisions deeply immersive and pervasive connectivity in 6G networks, Integrated Access and Backhaul (IAB) emerges as a critical enabler to meet the demanding requirements of massive and immersive communications. IAB networks offer a scalable solution for expanding broadband coverage in urban environments. However, optimizing IAB node deployment to ensure reliable coverage while minimizing costs remains challenging due to location constraints and the dynamic nature of cities. Existing heuristic methods, such as Greedy Algorithms, have been employed to address these optimization problems. This work presents a novel Deep Reinforcement Learning (DRL) approach for IAB network planning, tailored to future 6G scenarios that seek to support the ultra-high data rates and dense device connectivity required by immersive Metaverse applications. We utilize Deep Q-Network (DQN) with action elimination and integrate DQN, Double Deep Q-Network (DDQN), and Dueling DQN architectures to effectively manage large state and action spaces. Simulations with various initial donor configurations demonstrate the effectiveness of our DRL approach, with Dueling DQN reducing node count by an average of 12.3% compared to traditional heuristics. The study underscores how advanced DRL techniques can address complex network planning challenges in 6G-enabled Metaverse contexts, providing an efficient and adaptive solution for IAB deployment in diverse urban environments.
Problem

Research questions and friction points this paper is trying to address.

Optimizing 6G IAB node deployment for Metaverse connectivity.
Addressing dynamic urban constraints and cost minimization challenges.
Using DRL to enhance network planning for immersive applications.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Reinforcement Learning optimizes 6G IAB network planning.
DQN, DDQN, and Dueling DQN manage large state-action spaces.
Dueling DQN reduces node count by an average of 12.3%.
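The paper does not publish its implementation, but the two core ideas named above—the Dueling DQN aggregation of value and advantage streams, and action pruning over infeasible deployments—can be illustrated with a minimal sketch. The function and variable names (`dueling_q_values`, `action_mask`) and the toy 4-site scenario are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dueling_q_values(value, advantages, action_mask):
    """Dueling aggregation Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a'),
    followed by action pruning: infeasible actions get Q = -inf so
    they can never be chosen by a greedy argmax policy."""
    q = value + advantages - advantages.mean()
    return np.where(action_mask, q, -np.inf)

# Toy example (illustrative): 4 candidate IAB node sites, site 2
# ruled out by spatial constraints (e.g. the location is unavailable).
value = 1.0                                   # state-value stream V(s)
advantages = np.array([0.5, -0.2, 2.0, 0.1])  # advantage stream A(s,a)
mask = np.array([True, True, False, True])    # feasible actions

q = dueling_q_values(value, advantages, mask)
best_action = int(np.argmax(q))  # pruned site 2 is never selected
```

In the paper's setting the value and advantage streams would come from the two heads of a trained network rather than fixed arrays, but the aggregation and masking steps operate exactly as shown; pruning shrinks the effective action space, which is what makes large IAB planning instances tractable.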
Jie Zhang
School of Physics, Engineering and Technology, University of York, York YO10 5DD
S. Chetty
School of Physics, Engineering and Technology, University of York, York YO10 5DD
Qiao Wang
School of Information Science and Engineering, Southeast University
Urban Data Analysis, Applied Mathematics and Statistics
Chenrui Sun
School of Physics, Engineering and Technology, University of York, York YO10 5DD
Paul Daniel Mitchell
School of Physics, Engineering and Technology, University of York, York YO10 5DD
David Grace
University of York
cognitive radio, cognitive networks, dynamic spectrum access, green communications, high altitude platforms
Hamed Ahmadi
School of Physics, Engineering and Technology, University of York, York YO10 5DD