CarPLAN: Context-Adaptive and Robust Planning with Dynamic Scene Awareness for Autonomous Driving

📅 2026-03-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited contextual understanding of existing imitation learning methods in complex, dynamic traffic scenarios, which hinders robust and adaptive motion planning. To overcome this, we propose CarPLAN, a novel framework that introduces displacement-aware predictive encoding (DPE) to enhance spatial relationship modeling and incorporates a context-adaptive mixture-of-experts decoder (CMD) that dynamically activates experts across Transformer layers to align with scene structure. By integrating relative displacement prediction error into the loss function, our model achieves state-of-the-art performance across all metrics in nuPlan closed-loop simulation, demonstrating particularly strong results in challenging scenarios such as Test14-Hard. Furthermore, experiments on Waymax confirm the model’s generalization capability across datasets.
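The summary describes CMD as dynamically activating expert decoders across Transformer layers based on scene structure. The paper text here does not include code, so the following is only a minimal numpy sketch of generic top-k Mixture-of-Experts routing (the function names, `tanh` experts, and `top_k=2` choice are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(tokens, gate_w, expert_ws, top_k=2):
    """Route each token to its top-k experts and mix their outputs.

    tokens:    (n, d) scene/agent token features
    gate_w:    (d, E) gating weights producing routing scores
    expert_ws: list of E (d, d) expert weight matrices (tanh experts here
               are an illustrative stand-in for full decoder experts)
    """
    scores = softmax(tokens @ gate_w)               # (n, E) routing probabilities
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        top = np.argsort(scores[i])[-top_k:]        # indices of the top-k experts
        w = scores[i, top] / scores[i, top].sum()   # renormalize their gate weights
        for e, wt in zip(top, w):
            out[i] += wt * np.tanh(tok @ expert_ws[e])
    return out
```

In a CMD-style decoder, such a gate would sit at each Transformer layer so that different scene structures activate different expert subsets.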

πŸ“ Abstract
Imitation learning (IL) is widely used for motion planning in autonomous driving due to its data efficiency and access to real-world driving data. For safe and robust real-world driving, IL-based planning requires capturing the complex driving contexts inherent in real-world data and enabling context-adaptive decision-making, rather than relying solely on expert trajectory imitation. In this paper, we propose CarPLAN, a novel IL-based motion planning framework that explicitly enhances driving context understanding and enables adaptive planning across diverse traffic scenarios. Our contributions are twofold: We introduce Displacement-Aware Predictive Encoding (DPE) to improve the model's spatial awareness by predicting future displacement vectors between the Autonomous Vehicle (AV) and surrounding scene elements. This allows the planner to account for relational spacing when generating trajectories. In addition to the standard imitation loss, we incorporate an augmented loss term that captures displacement prediction errors, ensuring planning decisions consider relative distances from other agents. To improve the model's ability to handle diverse driving contexts, we propose Context-Adaptive Multi-Expert Decoder (CMD), which leverages the Mixture of Experts (MoE) framework. CMD dynamically selects the most suitable expert decoders based on scene structure at each Transformer layer, enabling adaptive and context-aware planning in dynamic environments. We evaluate CarPLAN on the nuPlan benchmark and demonstrate state-of-the-art performance across all closed-loop simulation metrics. In particular, CarPLAN exhibits robust performance on challenging scenarios such as Test14-Hard, validating its effectiveness in complex driving conditions. Additional experiments on the Waymax benchmark further demonstrate its generalization capability across different benchmark settings.
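The abstract states that DPE augments the standard imitation loss with a term penalizing errors in predicted AV-to-element displacement vectors. As a rough illustration only (the weighting scheme, `lam`, and MSE form are assumptions; the paper may use a different formulation), the combined objective could look like:

```python
import numpy as np

def carplan_style_loss(pred_traj, expert_traj, pred_disp, true_disp, lam=0.5):
    """Imitation loss plus a displacement-prediction term (DPE-style).

    pred_traj, expert_traj: (T, 2) planned vs. expert trajectory
    pred_disp, true_disp:   (T, K, 2) predicted vs. ground-truth displacement
                            vectors from the AV to K scene elements
    lam: hypothetical weight balancing the two terms
    """
    l_imit = np.mean((pred_traj - expert_traj) ** 2)  # standard imitation term
    l_disp = np.mean((pred_disp - true_disp) ** 2)    # displacement-error term
    return l_imit + lam * l_disp
```

The displacement term is what forces planning decisions to account for relative spacing to other agents rather than trajectory shape alone.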
Problem

Research questions and friction points this paper is trying to address.

Imitation Learning
Autonomous Driving
Context-Adaptive Planning
Dynamic Scene Awareness
Motion Planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Displacement-Aware Predictive Encoding
Context-Adaptive Multi-Expert Decoder
Imitation Learning
Mixture of Experts
Dynamic Scene Awareness
Junyong Yun
Department of Artificial Intelligence, Hanyang University, 04763, Republic of Korea
Jungho Kim
Interdisciplinary Program in Artificial Intelligence, Seoul National University, 08826, Seoul, Republic of Korea
ByungHyun Lee
Department of Artificial Intelligence, Hanyang University, 04763, Republic of Korea
Dongyoung Lee
Department of Electrical and Computer Engineering, Seoul National University, 08826, Seoul, Republic of Korea
Sehwan Choi
Department of Artificial Intelligence, Hanyang University, 04763, Republic of Korea
Seunghyeop Nam
Interdisciplinary Program in Artificial Intelligence, Seoul National University, 08826, Seoul, Republic of Korea
Kichun Jo
Department of Automotive Engineering, Hanyang University, 04763, Republic of Korea
Jun Won Choi
Department of Electrical and Computer Engineering, Seoul National University
Artificial intelligence · Autonomous driving robots/vehicles · Robot perception · Sensor fusion