🤖 AI Summary
Existing closed-loop planning benchmarks for autonomous driving predominantly rely on rule-based reactive agents (e.g., IDM), resulting in limited behavioral diversity and poor interaction fidelity, and thus biased evaluation. To address this, the authors propose the first learning-based, reactive multi-agent simulation benchmark for closed-loop evaluation. The approach introduces: (1) a noise-decoupled diffusion model that generates high-fidelity, diverse traffic-participant behaviors; (2) an interaction-aware agent selection mechanism that dynamically adapts to scene complexity; and (3) seamless integration into the nuPlan framework, enabling unified and fair evaluation of rule-based, learning-based, and hybrid planners. Experiments demonstrate that the benchmark significantly improves behavioral realism and human-likeness, more accurately reveals the performance advantages of learning-based planners in dynamic interactive scenarios, and establishes a more credible, challenging standard for autonomous driving planning evaluation.
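The paper does not spell out how the interaction-aware agent selection works, but the idea of simulating only the agents that meaningfully interact with the ego vehicle (and replaying the rest from logs) can be sketched with a simple heuristic. The following is an illustrative sketch, not the authors' implementation; all names (`AgentState`, `interaction_score`, `select_reactive_agents`) and the distance/closing-rate heuristic are assumptions for the sake of the example.

```python
from dataclasses import dataclass
import math

@dataclass
class AgentState:
    agent_id: int
    x: float   # position (m)
    y: float
    vx: float  # velocity (m/s)
    vy: float

def interaction_score(ego: AgentState, agent: AgentState) -> float:
    """Hypothetical interaction score: inverse distance, boosted when
    the agent is closing in on the ego (negative range rate)."""
    dx, dy = agent.x - ego.x, agent.y - ego.y
    dist = math.hypot(dx, dy) + 1e-6
    # Relative velocity projected onto the ego-agent line of sight.
    range_rate = ((agent.vx - ego.vx) * dx + (agent.vy - ego.vy) * dy) / dist
    closing_bonus = max(0.0, -range_rate)  # reward only approaching agents
    return (1.0 + closing_bonus) / dist

def select_reactive_agents(ego, agents, radius=50.0, max_agents=8):
    """Pick up to `max_agents` agents within `radius` metres of the ego,
    ranked by interaction score; the rest could be log-replayed."""
    candidates = [
        a for a in agents
        if math.hypot(a.x - ego.x, a.y - ego.y) <= radius
    ]
    candidates.sort(key=lambda a: interaction_score(ego, a), reverse=True)
    return [a.agent_id for a in candidates[:max_agents]]
```

Capping the number of learned reactive agents per scene is one plausible way to trade interaction fidelity for simulation cost, since diffusion-based agents are far more expensive to step than replayed or rule-based ones.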
📝 Abstract
Recent advances in closed-loop planning benchmarks have significantly improved the evaluation of autonomous vehicles. However, existing benchmarks still rely on rule-based reactive agents such as the Intelligent Driver Model (IDM), which lack behavioral diversity and fail to capture realistic human interactions, leading to oversimplified traffic dynamics. To address these limitations, we present nuPlan-R, a new reactive closed-loop planning benchmark that integrates learning-based reactive multi-agent simulation into the nuPlan framework. Our benchmark replaces the rule-based IDM agents with noise-decoupled diffusion-based reactive agents and introduces an interaction-aware agent selection mechanism to ensure both realism and computational efficiency. Furthermore, we extend the benchmark with two additional metrics to enable a more comprehensive assessment of planning performance. Extensive experiments demonstrate that our reactive agent model produces more realistic, diverse, and human-like traffic behaviors, yielding a benchmark environment that better reflects real-world interactive driving. We further reimplement a collection of rule-based, learning-based, and hybrid planning approaches within nuPlan-R, providing a clearer picture of planner performance in complex interactive scenarios and better highlighting the advantages of learning-based planners in such dynamic settings. These results establish nuPlan-R as a new standard for fair, reactive, and realistic closed-loop planning evaluation. We will open-source the code for the new benchmark.