🤖 AI Summary
To address the joint optimization of link scheduling and resource slicing under dynamic interference, time-varying topologies, and antenna constraints in millimeter-wave Integrated Access and Backhaul (IAB) networks, this paper proposes a decentralized cooperative reinforcement learning framework. The framework integrates a greedy Double Deep Q-Network (DDQN)-based scheduler with a multi-agent DDQN-based resource allocator to enable joint adaptive configuration of spectrum and beamforming resources. Its key innovations are a dynamic topology-aware mechanism and a network-slicing-driven reward model. Evaluated across 96 randomly generated topologies, the approach achieves 99.84% scheduling accuracy and improves end-to-end throughput by 20.90% over baseline methods. Designed for low latency and high scalability, the solution is particularly suited to dense, resource-constrained 5G-Advanced and 6G IAB deployments.
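The Double DQN update underlying both the scheduler and the allocator decouples action *selection* (online network) from action *evaluation* (target network), which reduces the overestimation bias of vanilla Q-learning. A minimal illustrative sketch follows; the function and the toy Q-value tables are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def ddqn_target(q_online, q_target, next_state, reward, done, gamma=0.99):
    """Double DQN bootstrap target: the online network selects the best
    next action, the target network evaluates its value."""
    best_action = int(np.argmax(q_online(next_state)))   # selection
    bootstrap = float(q_target(next_state)[best_action]) # evaluation
    return reward + gamma * (0.0 if done else bootstrap)

# Toy fixed Q-tables standing in for trained networks (hypothetical values).
q_online = lambda s: np.array([1.0, 3.0, 2.0])  # online net picks action 1
q_target = lambda s: np.array([0.5, 1.5, 4.0])  # target net scores action 1 as 1.5
y = ddqn_target(q_online, q_target, next_state=None, reward=1.0, done=False)
# y = 1.0 + 0.99 * 1.5 = 2.485
```

In the paper's setting, each agent's state would encode local topology and traffic, and actions would correspond to link activations or slice resource assignments; the target computation itself is the standard Double DQN form shown here.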
📝 Abstract
Integrated Access and Backhaul (IAB) is critical for dense 5G and beyond deployments, especially in mmWave bands where fiber backhaul is infeasible. We propose a novel Deep Reinforcement Learning (DRL) framework for joint link scheduling and resource slicing in dynamic, interference-prone IAB networks. Our method integrates a greedy Double Deep Q-Network (DDQN) scheduler, which activates access and backhaul links based on traffic and topology, with a multi-agent DDQN allocator for bandwidth and antenna assignment across network slices. This decentralized approach respects strict antenna constraints and supports concurrent scheduling across heterogeneous links. Evaluations across 96 dynamic topologies show 99.84% scheduling accuracy and a 20.90% throughput improvement over baselines. The framework's efficient operation and adaptability make it suitable for dynamic and resource-constrained deployments, where fast link scheduling and autonomous backhaul coordination are vital.