Robo-taxi Fleet Coordination at Scale via Reinforcement Learning

📅 2025-04-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large-scale autonomous mobility-on-demand (AMoD) systems face critical challenges in coordinating fleets of driverless taxis—namely, low scheduling efficiency, poor generalizability across urban scales, and difficulty balancing fairness with scalability. Method: We propose the first end-to-end coordination framework integrating graph neural networks (GNNs) with proximal policy optimization (PPO), innovatively embedding operations research priors—such as heuristics derived from mixed-integer programming—to enhance decision interpretability and training convergence. The method supports spatiotemporal graph modeling and multi-granularity traffic simulation. Contribution/Results: Our approach achieves 23–37% higher scheduling efficiency and a 5.8× reduction in inference latency on standard benchmarks. Furthermore, we open-source the first standardized, network-level AMoD evaluation platform alongside a full-stack codebase, enabling reproducible, large-scale AMoD research.

Technology Category

Application Category

📝 Abstract
Fleets of robo-taxis offering on-demand transportation services, commonly known as Autonomous Mobility-on-Demand (AMoD) systems, hold significant promise for societal benefits, such as reducing pollution, energy consumption, and urban congestion. However, orchestrating these systems at scale remains a critical challenge, with existing coordination algorithms often failing to exploit the systems' full potential. This work introduces a novel decision-making framework that unites mathematical modeling with data-driven techniques. In particular, we present the AMoD coordination problem through the lens of reinforcement learning and propose a graph network-based framework that exploits the main strengths of graph representation learning, reinforcement learning, and classical operations research tools. Extensive evaluations across diverse simulation fidelities and scenarios demonstrate the flexibility of our approach, achieving superior system performance, computational efficiency, and generalizability compared to prior methods. Finally, motivated by the need to democratize research efforts in this area, we release publicly available benchmarks, datasets, and simulators for network-level coordination alongside an open-source codebase designed to provide accessible simulation platforms and establish a standardized validation process for comparing methodologies. Code available at: https://github.com/StanfordASL/RL4AMOD
Problem

Research questions and friction points this paper is trying to address.

Optimizing large-scale robo-taxi fleet coordination for efficiency
Integrating reinforcement learning with graph networks for AMoD systems
Developing open benchmarks for standardized AMoD methodology comparison
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for robo-taxi fleet coordination
Graph network-based decision-making framework
Open-source benchmarks and simulators for validation
🔎 Similar Papers
No similar papers found.