VissimRL: A Multi-Agent Reinforcement Learning Framework for Traffic Signal Control Based on Vissim

πŸ“… 2026-01-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work proposes a modular and extensible multi-agent reinforcement learning (MARL) training framework that integrates the high-fidelity traffic simulator VISSIM, which has been underutilized in existing RL research due to its complex COM interface and the absence of standardized integration protocols. By encapsulating VISSIM’s functionality through a high-level Python API, the framework establishes a unified interaction protocol for both single- and multi-agent settings, significantly lowering the barrier to entry while preserving simulation fidelity and computational efficiency. Experimental results demonstrate that the proposed approach effectively enhances traffic performance and enables robust cooperative control among agents. This framework serves as a general-purpose bridge between high-fidelity traffic simulation and MARL, facilitating the translation of academic advances into practical industrial applications in traffic signal control.

πŸ“ Abstract
Traffic congestion remains a major challenge for urban transportation, leading to significant economic and environmental impacts. Traffic Signal Control (TSC) is one of the key measures to mitigate congestion, and recent studies have increasingly applied Reinforcement Learning (RL) for its adaptive capabilities. Compared with SUMO and CityFlow, the simulator Vissim offers high-fidelity driver behavior modeling and wide industrial adoption, but it remains underutilized in RL research due to its complex interface and the lack of standardized frameworks. To address this gap, this paper proposes VissimRL, a modular RL framework for TSC that encapsulates Vissim's COM interface through a high-level Python API, offering standardized environments for both single- and multi-agent training. Experiments show that VissimRL significantly reduces development effort while maintaining runtime efficiency, supports consistent improvements in traffic performance during training, and enables emergent coordination in multi-agent control. Overall, VissimRL demonstrates the feasibility of applying RL in high-fidelity simulations and serves as a bridge between academic research and practical applications in intelligent traffic signal control.
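The abstract describes wrapping Vissim's COM interface in a Gym-style environment so RL agents interact through a standard `reset()`/`step()` loop. The sketch below illustrates that pattern only; it is not VissimRL's actual API. All names (`StubVissim`, `VissimTscEnv`) are hypothetical, and the stub class stands in for the real COM object (which would be obtained via `win32com.client` and a licensed Vissim installation).

```python
# Hypothetical sketch of the environment pattern the paper describes:
# a Gym-style wrapper that hides simulator COM calls behind reset()/step().

class StubVissim:
    """Stand-in for the Vissim COM object; real code would call win32com."""
    def __init__(self):
        self.t = 0  # simulated seconds elapsed

    def run_steps(self, n):
        self.t += n  # advance the simulation clock

    def queue_lengths(self):
        # Real code would read per-approach queue counters via COM;
        # here we fabricate a decaying queue for illustration.
        return [max(0, 10 - self.t % 12) for _ in range(4)]

    def set_phase(self, phase):
        self.phase = phase  # real code would set the signal controller state


class VissimTscEnv:
    """Single-intersection traffic-signal-control environment (illustrative)."""
    def __init__(self, sim, n_phases=4, step_seconds=5):
        self.sim = sim
        self.n_phases = n_phases
        self.step_seconds = step_seconds

    def reset(self):
        self.sim = type(self.sim)()       # restart the simulation backend
        return self.sim.queue_lengths()   # observation: queue length per approach

    def step(self, action):
        assert 0 <= action < self.n_phases
        self.sim.set_phase(action)              # apply the chosen green phase
        self.sim.run_steps(self.step_seconds)   # advance the simulator
        obs = self.sim.queue_lengths()
        reward = -sum(obs)                      # a common TSC reward: negative total queue
        done = self.sim.t >= 3600               # terminate after one simulated hour
        return obs, reward, done, {}


env = VissimTscEnv(StubVissim())
obs = env.reset()
obs, reward, done, info = env.step(0)
```

A multi-agent variant would hold one such wrapper per intersection and batch the observations and actions, which is presumably what the framework's "unified interaction protocol" standardizes.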
Problem

Research questions and friction points this paper is trying to address.

Traffic Signal Control
Vissim
Reinforcement Learning
Multi-Agent
Simulation Framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

VissimRL
multi-agent reinforcement learning
traffic signal control
high-fidelity simulation
Python API
Hsiao-Chuan Chang
Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Sheng-You Huang
Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Yen-Chi Chen
Department of Statistics, University of Washington
Nonparametric Statistics, Missing Data, Clustering, Astrostatistics
I-Chen Wu
National Chiao Tung University
Computer Games, Artificial Intelligence