🤖 AI Summary
Existing multi-agent reinforcement learning (MARL) approaches for cooperative collision avoidance among small-scale UAV swarms (≤3 agents) suffer from poor adaptability to continuous action spaces, high computational complexity, and excessive energy consumption. Method: We propose MACA, a centralized-training-with-decentralized-execution MARL algorithm featuring an actor-critic architecture and a novel marginalized state-action counterfactual baseline to address the credit assignment problem precisely. We further introduce MACAEnv, a physics-aware simulation environment that faithfully models UAV dynamics and inter-agent interaction constraints. Results: Experiments demonstrate that MACA achieves over 16% higher average reward than state-of-the-art MARL baselines; compared to conventional collision-avoidance methods, it reduces task failure rate by 90% and cuts response time by more than 99%. MACA exhibits strong robustness across diverse scenarios, significantly enhancing both flight safety and energy efficiency.
📄 Abstract
Multi-UAV collision avoidance is a challenging task for UAV swarm applications due to the need for tight cooperation among swarm members for collision-free path planning. Centralized Training with Decentralized Execution (CTDE) in Multi-Agent Reinforcement Learning is a promising approach to multi-UAV collision avoidance, in which the key challenge is to effectively learn decentralized policies that cooperatively maximize a global reward. We propose a new multi-agent actor-critic learning scheme called MACA for UAV swarm collision avoidance. MACA uses a centralized critic to maximize the discounted global reward, which accounts for both safety and energy efficiency, and an actor per UAV to learn decentralized collision-avoidance policies. To solve the credit assignment problem in CTDE, we design a counterfactual baseline that marginalizes both an agent's state and its action, enabling evaluation of an agent's contribution in the joint observation-action space. To train and evaluate MACA, we design our own simulation environment, MACAEnv, to closely mimic the realistic behaviors of a UAV swarm. Simulation results show that MACA achieves more than 16% higher average reward than two state-of-the-art MARL algorithms, and reduces the failure rate by 90% and response time by over 99% compared to a conventional UAV swarm collision avoidance algorithm across all test scenarios.
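To make the credit-assignment idea concrete, the following is a minimal sketch of the action-marginalized counterfactual advantage familiar from COMA-style critics. It is illustrative only: MACA's baseline additionally marginalizes the agent's state, and the exact form of that joint state-action marginalization is not specified in this summary, so the function name and shapes below are assumptions.

```python
import numpy as np

def counterfactual_advantage(q_values, policy_probs, taken_action):
    """Illustrative COMA-style counterfactual advantage for one agent.

    q_values:     shape (n_actions,); the centralized critic's Q(s, (u_-i, a))
                  with the other agents' actions held fixed while agent i's
                  action a varies over its action set.
    policy_probs: shape (n_actions,); agent i's policy pi_i(a | o_i).
    taken_action: index of the action agent i actually executed.

    The baseline marginalizes out agent i's own action under its policy,
    so the advantage isolates agent i's contribution to the global reward.
    (MACA's full baseline would also average Q over counterfactual states.)
    """
    baseline = np.dot(policy_probs, q_values)  # E_{a ~ pi_i}[Q(s, (u_-i, a))]
    return q_values[taken_action] - baseline

# Toy example: 3 candidate actions, critic prefers action 2.
q = np.array([1.0, 2.0, 4.0])
pi = np.array([0.2, 0.3, 0.5])
adv = counterfactual_advantage(q, pi, taken_action=2)
# baseline = 0.2*1 + 0.3*2 + 0.5*4 = 2.8, so adv = 4.0 - 2.8 = 1.2
```

Each actor would then be updated with a policy gradient weighted by this per-agent advantage rather than the raw global return, which is what lets decentralized policies be trained against a single shared reward.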