AI Summary
To address safety failures and insufficient real-time performance in decentralized multi-agent trajectory planning caused by localization uncertainty, this paper proposes a perception-enhanced, decentralized, asynchronous robust planning framework. Methodologically, it introduces PRIMER, a lightweight imitation-learning-based planner trained on an optimization-based expert planner, PARM*. PRIMER integrates real-time perception inputs to enable dynamic obstacle avoidance and conflict resolution. Neural-network deployment optimizations and an asynchronous communication mechanism further enable high-frequency online replanning. Experimental results demonstrate that, while respecting strict safety constraints, the framework achieves inference up to 5,500× faster than state-of-the-art optimization-based methods, with trajectory quality and robustness that match or exceed contemporary SOTA performance. The key contributions include: (1) the first lightweight imitation-learning planner for decentralized multi-agent navigation; (2) tight integration of real-time perception for safe, reactive planning; and (3) an asynchronous, deployable architecture enabling scalable, low-latency operation.
Abstract
In decentralized multiagent trajectory planners, agents need to communicate and exchange their positions to generate collision-free trajectories. However, due to localization errors and uncertainties, trajectory deconfliction can fail even if trajectories are perfectly shared between agents. To address this issue, we first present PARM and PARM*, perception-aware, decentralized, asynchronous multiagent trajectory planners that enable a team of agents to navigate uncertain environments while deconflicting trajectories and avoiding obstacles using perception information. PARM* differs from PARM in that it is less conservative, using more computation to find closer-to-optimal solutions. While these methods achieve state-of-the-art performance, they suffer from high computational costs because they must solve large optimization problems onboard, making it difficult for agents to replan at high rates. To overcome this challenge, we present our second key contribution, PRIMER, a learning-based planner trained with imitation learning (IL) using PARM* as the expert demonstrator. PRIMER leverages the low computational cost of neural networks at deployment and achieves computation speeds up to 5500 times faster than optimization-based approaches.
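The core idea behind PRIMER, training a cheap learned policy to imitate an expensive optimization-based expert, can be illustrated with a minimal behavior-cloning sketch. The sketch below is illustrative only: `expert_planner` is a hypothetical stand-in for PARM* (the real expert solves a large trajectory optimization), and the linear least-squares fit stands in for PRIMER's neural-network training. It shows why inference is so much cheaper than optimization: once trained, producing an action is a single matrix multiply.

```python
import numpy as np

# Hypothetical stand-in for the PARM* expert: maps an agent's state
# (position + nearest-obstacle offset) to a collision-avoiding velocity.
# The real expert solves an onboard trajectory optimization instead.
def expert_planner(state):
    goal_dir = -state[:2]       # head toward the origin (goal)
    avoid_dir = state[2:]       # push away from the obstacle
    return goal_dir + 0.5 * avoid_dir

rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(500, 4))           # sampled states
actions = np.array([expert_planner(s) for s in states])  # expert labels

# Behavior cloning: fit a policy to the expert's demonstrations by
# least squares (a linear stand-in for PRIMER's network regression).
A, *_ = np.linalg.lstsq(states, actions, rcond=None)

# At deployment, the learned policy is one matrix multiply --
# far cheaper than re-solving the expert's optimization online.
test_state = np.array([0.3, -0.2, 0.1, 0.0])
learned_action = test_state @ A
assert np.allclose(learned_action, expert_planner(test_state), atol=1e-6)
```

Because the toy expert here is linear, the cloned policy matches it exactly; a real expert like PARM* is nonlinear, which is why PRIMER uses a neural network rather than a linear fit.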