🤖 AI Summary
To address scalability and safety challenges in large-scale connected automated vehicles (CAVs) engaging in heterogeneous agent interactions under local information, this paper proposes a novel α-potential game framework. It establishes, for the first time, an analytical relationship between the game parameter α and interaction strength/asymmetry, overcoming key limitations of mean-field games in collision avoidance modeling and heterogeneity representation. By reformulating Nash equilibrium computation as distributed potential function optimization—and integrating decentralized neural network policies with policy gradient methods—the framework enables efficient, scalable cooperative decision-making across diverse traffic flow scenarios. Experiments demonstrate significant improvements in collision avoidance performance and accuracy of heterogeneous behavior modeling within complex, dynamic environments, while maintaining computational scalability and practical deployability.
📝 Abstract
Designing scalable and safe control strategies for large populations of connected and automated vehicles (CAVs) requires accounting for strategic interactions among heterogeneous agents under decentralized information. While dynamic games provide a natural modeling framework, computing Nash equilibria (NEs) in large-scale settings remains challenging, and existing mean-field game approximations rely on restrictive assumptions that fail to capture collision avoidance and heterogeneous behaviors. This paper proposes an $α$-potential game framework for decentralized CAV control. We show that computing $α$-NE reduces to solving a decentralized control problem, and derive tight bounds of the parameter $α$ based on interaction intensity and asymmetry. We further develop scalable policy gradient algorithms for computing $α$-NEs using decentralized neural-network policies. Numerical experiments demonstrate that the proposed framework accommodates diverse traffic flow models and effectively captures collision avoidance, obstacle avoidance, and agent heterogeneity.