Human-Centric Traffic Signal Control for Equity: A Multi-Agent Action Branching Deep Reinforcement Learning Approach

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing traffic signal control methods, which are predominantly vehicle-centric, struggle with multi-agent coordination in high-dimensional discrete action spaces, and often neglect fairness for vulnerable road users such as pedestrians and passengers. To overcome these challenges, the authors propose MA2B-DDQN, a framework whose action-branching mechanism decomposes the joint action space into two hierarchical discrete decisions: local green-light allocation per phase and a global phase duration. A human-centric reward function is further designed, incorporating the number of delayed travelers as a penalty term. Evaluated on seven realistic traffic scenarios in Melbourne, the approach significantly reduces the number of affected travelers, outperforms state-of-the-art deep reinforcement learning and baseline methods, and performs robustly across diverse settings, thereby enhancing both system efficiency and fairness among multiple traveler groups.
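The human-centric penalty described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, and the assumption that pedestrians, vehicle occupants, and transit passengers are weighted equally, are both assumptions for the sake of the example.

```python
# Hypothetical sketch of a reward that penalizes the number of delayed
# individuals across traveler groups. Equal weighting of the three groups
# is an assumption; the paper's exact reward may differ.

def human_centric_reward(delayed_pedestrians: int,
                         delayed_occupants: int,
                         delayed_passengers: int) -> float:
    """Negative reward proportional to the total count of delayed travelers."""
    total_delayed = delayed_pedestrians + delayed_occupants + delayed_passengers
    return -float(total_delayed)

# Example: 3 delayed pedestrians, 12 delayed car occupants, and 40 delayed
# bus passengers yield a penalty of -55, so a fully loaded bus dominates
# the signal controller's incentive.
print(human_centric_reward(3, 12, 40))  # -55.0
```

Counting individuals rather than vehicles is what makes the reward "human-centric": a bus with 40 passengers contributes 40 times the penalty of a single-occupant car held for the same time.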

📝 Abstract
Coordinating traffic signals along multimodal corridors is challenging because many multi-agent deep reinforcement learning (DRL) approaches remain vehicle-centric and struggle with high-dimensional discrete action spaces. We propose MA2B-DDQN, a human-centric multi-agent action-branching double Deep Q-Network (DQN) framework that explicitly optimizes traveler-level equity. Our key contribution is an action-branching discrete control formulation that decomposes corridor control into (i) local, per-intersection actions that allocate green time between the next two phases and (ii) a single global action that selects the total duration of those phases. This decomposition enables scalable coordination under discrete control while reducing the effective complexity of joint decision-making. We also design a human-centric reward that penalizes the number of delayed individuals in the corridor, accounting for pedestrians, vehicle occupants, and transit passengers. Extensive evaluations across seven realistic traffic scenarios in Melbourne, Australia, demonstrate that our approach significantly reduces the number of impacted travelers, outperforming existing DRL and baseline methods. Experiments confirm the robustness of our model, showing minimal variance across diverse settings. This framework not only advocates for a fairer traffic signal system but also provides a scalable solution adaptable to varied urban traffic conditions.
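The scalability claim in the abstract comes from how action branching shrinks the network's output layer: instead of one Q-value per joint corridor action, the model emits one small set of Q-values per local branch plus one for the global duration branch. The back-of-the-envelope sketch below illustrates the difference; the specific sizes (green-split levels, candidate durations) are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of why action branching tames the discrete action
# space. Branch sizes below are assumptions for illustration only.

def joint_action_space(n_intersections: int, n_splits: int,
                       n_durations: int) -> int:
    """Size of a flat joint action space: every combination of per-
    intersection green splits, times every candidate phase duration."""
    return (n_splits ** n_intersections) * n_durations

def branched_outputs(n_intersections: int, n_splits: int,
                     n_durations: int) -> int:
    """Q-value outputs under action branching: one local branch per
    intersection plus a single global duration branch."""
    return n_intersections * n_splits + n_durations

# A seven-intersection corridor with, say, 5 green-split levels per
# intersection and 4 candidate phase durations:
print(joint_action_space(7, 5, 4))  # 312500 joint actions
print(branched_outputs(7, 5, 4))    # 39 branched Q-value outputs
```

The joint formulation grows exponentially in the number of intersections, while the branched formulation grows only linearly, which is what makes discrete corridor-level coordination tractable.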
Problem

Research questions and friction points this paper is trying to address.

traffic signal control
equity
multi-agent reinforcement learning
human-centric
discrete action space
Innovation

Methods, ideas, or system contributions that make the work stand out.

action branching
human-centric reinforcement learning
traffic signal control
traveler equity
multi-agent DRL
Xiaocai Zhang
Department of Infrastructure Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, VIC 3010, Australia
Neema Nassir
Associate Professor, University of Melbourne
Transport big data · Public transit · Traffic simulation · Automated vehicles · Multimodal transport
Lok Sang Chan
Department of Infrastructure Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, VIC 3010, Australia
M. Haghani
Department of Infrastructure Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, VIC 3010, Australia