🤖 AI Summary
This paper addresses the challenge of limited and inefficiently allocated computational budgets in real-time multi-agent path finding (RT-MAPF). The authors propose an agent-centric dynamic budget allocation mechanism, departing from conventional shared-budget-pool strategies, in which each agent is individually allocated planning resources. Integrated into a windowed Prioritized Planning (PrP) scheme and the MAPF-LNS2 framework, the approach enables iterative local replanning. The key contribution is the empirical finding that agent-wise allocation outperforms global sharing: under over-constrained conditions, it significantly improves success rate and path quality. Experiments demonstrate that the method solves more problem instances within a smaller total computational budget, reducing average makespan by 12.7%. The gains are especially pronounced in high-density and highly dynamic environments, confirming the efficacy of fine-grained, agent-specific resource allocation for scalable RT-MAPF.
📝 Abstract
Multi-Agent Pathfinding (MAPF) is the problem of finding paths for a set of agents such that each agent reaches its desired destination while avoiding collisions with the other agents. Many MAPF solvers are designed to run offline, that is, first generate paths for all agents and then execute them. Real-Time MAPF (RT-MAPF) embodies a realistic MAPF setup in which one cannot wait until a complete path for each agent has been found before they start to move. Instead, planning and execution are interleaved, where the agents must commit to a fixed number of steps in a constant amount of computation time, referred to as the planning budget. Existing solutions to RT-MAPF iteratively call windowed versions of MAPF algorithms in every planning period, without explicitly considering the size of the planning budget. We address this gap and explore different policies for allocating the planning budget in windowed versions of standard MAPF algorithms, namely Prioritized Planning (PrP) and MAPF-LNS2. Our exploration shows that the baseline approach in which all agents draw from a shared planning budget pool is ineffective in over-constrained situations. Instead, policies that distribute the planning budget over the agents are able to solve more problems with a smaller makespan.
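The budget-policy contrast described in the abstract can be sketched in a toy simulation. This is a minimal illustration, not the paper's implementation: the `plan_window` routine, the agent "difficulty" values, and both policy functions are hypothetical stand-ins meant only to show why a shared pool can fail in over-constrained settings while reserved per-agent slices do not.

```python
# Illustrative sketch (hypothetical, not the paper's code): a shared
# planning-budget pool versus per-agent budget slices in windowed planning.

def plan_window(agent, budget):
    """Toy stand-in for a windowed single-agent search.

    Returns (success, expansions_used); an agent "succeeds" here if its
    hypothetical difficulty fits within the budget it was given.
    """
    used = min(agent["difficulty"], budget)
    return agent["difficulty"] <= budget, used

def shared_pool_policy(agents, total_budget):
    """Baseline: all agents draw node expansions from one shared pool."""
    remaining, solved = total_budget, 0
    for agent in agents:
        # An earlier (higher-priority) hard agent can exhaust the pool,
        # starving every agent planned after it.
        ok, used = plan_window(agent, remaining)
        remaining -= used
        solved += ok
    return solved

def per_agent_policy(agents, total_budget):
    """Distributed policy: each agent gets an equal, reserved slice."""
    slice_ = total_budget // len(agents)
    return sum(plan_window(agent, slice_)[0] for agent in agents)

# Over-constrained scenario: one hard agent, two easy ones, a tight budget.
agents = [{"difficulty": 90}, {"difficulty": 10}, {"difficulty": 10}]
print(shared_pool_policy(agents, 60))  # hard agent drains the entire pool
print(per_agent_policy(agents, 60))    # easy agents keep their own slices
```

Under the shared pool, the hard agent consumes all 60 expansions and nothing is solved; with per-agent slices of 20, the two easy agents still succeed, mirroring the abstract's claim that distributing the budget over the agents helps in over-constrained situations.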