🤖 AI Summary
Low trust between autonomous vehicles (AVs) and human users hinders safe and effective human-AV collaboration.
Method: This study proposes a data-driven, context-aware approach to modeling human driver behavior, grounded in the SymAware framework for artificial awareness in multi-agent systems, and constructs an interpretable Markov chain model of the human driver from large-scale user-study data. The model explicitly encodes how three key factors (the AV's transparency level, the scenario's environment, and the users' demographics) shape the transition probabilities between driving states.
Contribution/Results: The contributions are twofold: a detailed account of a large-scale, user-centered study on human-AV interaction, and the Markov chain driver models computed from the study's data. The results show that the AV's transparency, the scenario's environment, and the users' demographics each produce significant differences in the model's transitions, supporting trustworthy, explainable human-AV collaboration.
📝 Abstract
Current robotic agents, such as autonomous vehicles (AVs) and drones, need to deal with uncertain real-world environments with appropriate situational awareness (SA), risk awareness, coordination, and decision-making. The SymAware project strives to address this issue by designing an architecture for artificial awareness in multi-agent systems, enabling safe collaboration of autonomous vehicles and drones. However, these agents will also need to interact with human users (drivers, pedestrians, drone operators), which in turn requires an understanding of how to model the human in the interaction scenario and how to foster trust and transparency between the agent and the human. In this work, we aim to create a data-driven model of a human driver to be integrated into our SA architecture, grounding our research in the principles of trustworthy human-agent interaction. To collect the data necessary for creating the model, we conducted a large-scale user-centered study on human-AV interaction, in which we investigated the interplay between the AV's transparency and the users' behavior. The contributions of this paper are twofold: first, we illustrate in detail our human-AV study and its findings; second, we present the resulting Markov chain models of the human driver computed from the study's data. Our results show that depending on the AV's transparency, the scenario's environment, and the users' demographics, we can obtain significant differences in the model's transitions.
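The core idea of such a model can be sketched as a Markov chain whose transition matrix is selected by the interaction context. The sketch below is purely illustrative: the state names (`monitor`, `intervene`, `hand_over`), the context keys, and all probabilities are invented placeholders, not the states or values from the study.

```python
import random

# Hypothetical driver states -- illustrative only, not the
# states used in the SymAware study.
STATES = ["monitor", "intervene", "hand_over"]

# One transition matrix per (transparency, environment) context.
# All probabilities are made-up placeholders, not study results.
TRANSITIONS = {
    ("high_transparency", "highway"): {
        "monitor":   {"monitor": 0.85, "intervene": 0.10, "hand_over": 0.05},
        "intervene": {"monitor": 0.60, "intervene": 0.30, "hand_over": 0.10},
        "hand_over": {"monitor": 0.50, "intervene": 0.20, "hand_over": 0.30},
    },
    ("low_transparency", "urban"): {
        "monitor":   {"monitor": 0.60, "intervene": 0.30, "hand_over": 0.10},
        "intervene": {"monitor": 0.40, "intervene": 0.45, "hand_over": 0.15},
        "hand_over": {"monitor": 0.30, "intervene": 0.30, "hand_over": 0.40},
    },
}

def next_state(state, context, rng=random):
    """Sample the driver's next state from the context-conditioned chain."""
    probs = TRANSITIONS[context][state]
    states, weights = zip(*probs.items())
    return rng.choices(states, weights=weights, k=1)[0]
```

In practice, the transition probabilities would be estimated from the behavioral data collected in the user study (e.g., by counting observed state transitions per context), and demographic factors could select among further matrices in the same way.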