Approximate Multiagent Reinforcement Learning for On-Demand Urban Mobility Problem on a Large Map

📅 2023-11-02
🏛️ IEEE International Conference on Robotics and Automation
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of scalable, real-time ride-hailing dispatch in large urban areas with spatiotemporally uncertain demand, this paper proposes a distributed multiagent reinforcement learning framework that overcomes the computational bottleneck of conventional full-graph, vehicle-level rollout algorithms. Our method introduces a two-stage approximate rollout: first, dynamically partitioning the city graph based on the empirical demand distribution; second, executing sector-level rollout and instantaneous assignment (IA) in parallel across subregions. Theoretical analysis establishes system stability under a minimum fleet-size constraint. Experiments demonstrate that our approach matches the service performance of full-graph rollout while substantially reducing runtime. To the best of our knowledge, this is the first work to jointly integrate graph partitioning, rollout approximation, and stability guarantees into a multiagent taxi dispatch framework, achieving a principled balance among scalability, real-time responsiveness, and theoretical rigor.
📝 Abstract
In this paper, we focus on the autonomous multiagent taxi routing problem for a large urban environment where the location and number of future ride requests are unknown a priori, but can be estimated by an empirical distribution. Recent theory has shown that a rollout algorithm with a stable base policy produces a near-optimal stable policy. In the routing setting, a policy is stable if its execution keeps the number of outstanding requests uniformly bounded over time. Although rollout-based approaches are well-suited for learning cooperative multiagent policies with considerations for future demand, applying such methods to a large urban environment can be computationally expensive due to the large number of taxis required for stability. In this paper, we aim to address the computational bottleneck of multiagent rollout by proposing an approximate multiagent rollout-based two-phase algorithm that reduces computational costs while still achieving a stable near-optimal policy. Our approach partitions the graph into sectors based on the predicted demand and the maximum number of taxis that can run sequentially given the user's computational resources. The algorithm then applies instantaneous assignment (IA) for re-balancing taxis across sectors and a sector-wide multiagent rollout algorithm that is executed in parallel for each sector. We provide two main theoretical results: 1) we characterize the number of taxis m that is sufficient for IA to be stable; 2) we derive a necessary condition on m to maintain stability for IA as time goes to infinity. Our numerical results show that our approach achieves stability for an m that satisfies the theoretical conditions. We also empirically demonstrate that our proposed two-phase algorithm has performance equivalent to one-at-a-time rollout over the entire map, but with significantly lower runtimes.
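The instantaneous assignment (IA) step described in the abstract can be pictured as a greedy matching of outstanding requests to free taxis. The sketch below is a minimal illustration under assumed conventions (2-D positions, Euclidean distance, nearest-taxi-first matching); the paper's actual IA operates on the city graph and these names are hypothetical.

```python
import math

def instantaneous_assignment(taxis, requests):
    """Greedy IA sketch: match each pickup request to the nearest free taxi.

    taxis: list of (x, y) positions of currently free taxis.
    requests: list of (x, y) pickup locations of outstanding requests.
    Returns a dict mapping request index -> assigned taxi index.
    """
    free = set(range(len(taxis)))
    assignment = {}
    for r_idx, (rx, ry) in enumerate(requests):
        if not free:
            break  # more outstanding requests than free taxis
        # Choose the nearest remaining taxi by Euclidean distance.
        best = min(free, key=lambda t: math.hypot(taxis[t][0] - rx,
                                                  taxis[t][1] - ry))
        assignment[r_idx] = best
        free.remove(best)
    return assignment
```

For example, with taxis at (0, 0) and (5, 5) and a single request at (4, 4), the request is matched to the second taxi: `instantaneous_assignment([(0, 0), (5, 5)], [(4, 4)])` returns `{0: 1}`.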
Problem

Research questions and friction points this paper is trying to address.

Optimize multiagent taxi routing in large urban areas.
Reduce computational costs while maintaining policy stability.
Develop efficient, scalable algorithms for future demand adaptation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Approximate multiagent rollout algorithm
Graph partitioning based on demand
Parallel sector-wide rollout execution
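The three contributions above fit together as a two-phase pipeline: partition the map into sectors so that expected demand is roughly balanced, then run a per-sector policy concurrently. The sketch below illustrates that structure only; the greedy balancing heuristic and the `sector_policy` stub are assumptions for illustration, not the paper's partitioning or rollout algorithm.

```python
from concurrent.futures import ThreadPoolExecutor

def partition_by_demand(node_demand, num_sectors):
    """Phase 1 sketch: greedily balance demand across sectors.

    Assigns each map node (highest expected demand first) to the sector
    with the currently lightest total demand.
    node_demand: dict mapping node -> expected demand.
    Returns a list of sectors, each a list of nodes.
    """
    sectors = [[] for _ in range(num_sectors)]
    loads = [0.0] * num_sectors
    for node in sorted(node_demand, key=node_demand.get, reverse=True):
        i = loads.index(min(loads))  # lightest sector so far
        sectors[i].append(node)
        loads[i] += node_demand[node]
    return sectors

def dispatch(node_demand, num_sectors, sector_policy):
    """Phase 2 sketch: apply sector_policy to every sector in parallel."""
    sectors = partition_by_demand(node_demand, num_sectors)
    with ThreadPoolExecutor(max_workers=num_sectors) as pool:
        return list(pool.map(sector_policy, sectors))
```

Because each sector's rollout touches only its own subgraph, the per-sector calls are independent and the sector-wide rollouts can run in parallel, which is the source of the runtime reduction over full-graph one-at-a-time rollout.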