🤖 AI Summary
This work proposes an implicit leader-follower cooperative navigation framework for scenarios with dense obstacles and no inter-agent communication or external positioning. Only the leader drone knows the global goal; the followers rely solely on onboard LiDAR for local perception and use a deep reinforcement learning policy to navigate collectively, without explicit communication or identification of the leader. The approach integrates LiDAR point cloud clustering with an extended Kalman filter to robustly track neighboring agents, enabling emergent obstacle-avoidance and formation behaviors based purely on local observations. Extensive evaluations in NVIDIA Isaac Sim and real-world experiments with a fleet of five drones demonstrate robust collective navigation in complex indoor and outdoor environments, validating both the efficacy of the method and its sim-to-real transfer.
📝 Abstract
This paper presents a deep reinforcement learning (DRL) based controller for collective navigation of unmanned aerial vehicle (UAV) swarms in communication-denied settings, enabling robust operation in complex, obstacle-rich environments. Inspired by biological swarms in which informed individuals guide the group without explicit communication, we employ an implicit leader-follower framework: only the leader possesses goal information, while follower UAVs learn robust policies using only onboard LiDAR sensing, without any inter-agent communication or leader identification. Our system uses LiDAR point clustering and an extended Kalman filter for stable neighbor tracking, providing reliable perception independent of external positioning systems. The core of our approach is a DRL controller, trained in GPU-accelerated NVIDIA Isaac Sim, that enables followers to learn complex emergent behaviors, balancing flocking and obstacle avoidance, using only local perception. This allows the swarm to implicitly follow the leader while robustly handling perceptual challenges such as occlusion and limited field of view. The robustness and sim-to-real transfer of our approach are confirmed through extensive simulations and challenging real-world experiments with a swarm of five UAVs, which successfully demonstrated collective navigation across diverse indoor and outdoor environments without any communication or external localization.
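The abstract names a concrete perception pipeline, LiDAR point clustering plus an extended Kalman filter for neighbor tracking, but does not give implementation details. The following is a minimal illustrative sketch, not the authors' code: it assumes a simple greedy Euclidean clustering of 2D LiDAR returns and a constant-velocity EKF per neighbor (state `[x, y, vx, vy]`). All function names, noise covariances, and thresholds here are hypothetical tuning choices, not taken from the paper.

```python
import numpy as np

def cluster_points(points, eps=0.3):
    """Greedy Euclidean clustering of 2D LiDAR returns (hypothetical
    stand-in for the paper's clustering step); returns cluster centroids."""
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(p - c[-1]) < eps:  # near last point of a cluster
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters]

class NeighborEKF:
    """Constant-velocity EKF tracking one neighbor: state [x, y, vx, vy].
    Measurements are cluster centroids [x, y]."""
    def __init__(self, xy, dt=0.1):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4)                      # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))               # observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = 0.01 * np.eye(4)               # process noise (assumed tuning)
        self.R = 0.05 * np.eye(2)               # centroid noise (assumed tuning)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                       # predicted neighbor position

    def update(self, z):
        y = z - self.H @ self.x                 # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

The prediction step is what lets tracking survive the occlusions and limited field of view mentioned in the abstract: when a neighbor's cluster disappears for a few scans, `predict()` can coast the track forward until a matching centroid reappears.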