🤖 AI Summary
Existing car-following models (CFMs) generalize poorly and lack formal stability guarantees, falling short of autonomous driving's stringent safety and robustness requirements. To address this, we propose a knowledge-guided deep car-following paradigm that integrates prior knowledge derived from large language models (LLMs) with explicit local and string stability constraints. Our approach uses knowledge distillation to transfer this LLM-derived knowledge into a lightweight neural architecture, and introduces a stability-driven end-to-end training objective. Evaluated on the real-world NGSIM and HighD datasets, the resulting model significantly outperforms state-of-the-art physics-based, data-driven, and hybrid CFMs across three critical dimensions: behavioral fidelity (i.e., trajectory prediction accuracy), cross-scenario generalizability, and both theoretical (Lyapunov-based) and empirical stability. To our knowledge, this is the first CFM to jointly optimize behavioral realism and verifiable stability, bridging a fundamental gap between learning-based modeling and control-theoretic safety guarantees.
📝 Abstract
Car-following models (CFMs) are fundamental to traffic flow analysis and autonomous driving. Although calibrated physics-based and trained data-driven CFMs can replicate human driving behavior, their reliance on specific datasets limits generalization across diverse scenarios and reduces reliability in real-world deployment. Moreover, these models typically focus on behavioral fidelity and do not support the explicit optimization of local and string stability, which are increasingly important for the safe and efficient operation of autonomous vehicles (AVs). To address these limitations, we propose a Knowledge-Informed Deep Learning (KIDL) paradigm that distills the generalization capabilities of pre-trained Large Language Models (LLMs) into a lightweight and stability-aware neural architecture. LLMs are used to extract fundamental car-following knowledge beyond dataset-specific patterns, and this knowledge is transferred to a reliable, tractable, and computationally efficient model through knowledge distillation. KIDL also incorporates stability constraints directly into its training objective, ensuring that the resulting model not only emulates human-like behavior but also satisfies the local and string stability requirements essential for real-world AV deployment. We evaluate KIDL on the real-world NGSIM and HighD datasets, comparing its performance with representative physics-based, data-driven, and hybrid CFMs. Both empirical and theoretical results consistently demonstrate KIDL's superior behavioral generalization and traffic flow stability, offering a robust and scalable solution for next-generation traffic systems.
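The abstract describes two ingredients of the training objective: a distillation term that matches teacher (LLM-derived) accelerations, and penalty terms encouraging local and string stability. The sketch below is a minimal, hedged illustration of how such a combined loss could look; the toy linear student model, the teacher targets, the desired-gap/speed constants, and the penalty weight `lam` are all illustrative assumptions, not the paper's actual formulation. The stability conditions used are the classical linearized criteria for a CFM of the form a = f(s, Δv, v): local stability via f_v < 0 and string stability via f_v²/2 − f_Δv·f_v − f_s ≥ 0.

```python
# Hedged sketch of a KIDL-style objective: distillation MSE to teacher
# accelerations plus penalties for violating stability conditions.
# All model details and constants below are illustrative assumptions.

def student_accel(w, s, dv, v):
    """Toy linear student CFM: a = w_s*(s - s0) + w_dv*dv + w_v*(v_des - v).
    Stands in for the paper's neural architecture (s0=20 m, v_des=15 m/s
    are assumed constants)."""
    w_s, w_dv, w_v = w
    return w_s * (s - 20.0) + w_dv * dv + w_v * (15.0 - v)

def partials(w, s, dv, v, eps=1e-5):
    """Finite-difference partials f_s, f_dv, f_v of the acceleration."""
    f = lambda s_, dv_, v_: student_accel(w, s_, dv_, v_)
    f_s = (f(s + eps, dv, v) - f(s - eps, dv, v)) / (2 * eps)
    f_dv = (f(s, dv + eps, v) - f(s, dv - eps, v)) / (2 * eps)
    f_v = (f(s, dv, v + eps) - f(s, dv, v - eps)) / (2 * eps)
    return f_s, f_dv, f_v

def kidl_loss(w, batch, lam=1.0):
    """Distillation MSE to teacher accelerations + stability penalties.
    Local stability is encouraged via f_v < 0; string stability via
    f_v**2/2 - f_dv*f_v - f_s >= 0 (violations are penalized with a
    hinge/ReLU, so the penalty is zero when the condition holds)."""
    mse, penalty = 0.0, 0.0
    for s, dv, v, a_teacher in batch:
        a = student_accel(w, s, dv, v)
        mse += (a - a_teacher) ** 2
        f_s, f_dv, f_v = partials(w, s, dv, v)
        penalty += max(0.0, f_v)                             # local stability
        penalty += max(0.0, f_s - f_v**2 / 2 + f_dv * f_v)   # string stability
    n = len(batch)
    return mse / n + lam * penalty / n
```

In a real implementation the student would be a neural network, the partial derivatives would come from automatic differentiation rather than finite differences, and the teacher accelerations would be distilled from LLM responses; the key point is that the stability criteria enter the loss as differentiable penalty terms optimized end-to-end alongside behavioral fidelity.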