Blending Participatory Design and Artificial Awareness for Trustworthy Autonomous Vehicles

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Low trust between autonomous vehicles (AVs) and human users hinders safe and effective human-AV collaboration. Method: This study proposes a data-driven, context-aware approach to modeling human driver behavior, integrating participatory design with the SymAware artificial awareness framework to construct an interpretable Markov chain driver model. The model explicitly encodes the influence of three key factors (the AV's transparency level, dynamic environmental features, and user demographics) on driving decision transition probabilities. Contribution/Results: Grounded in a large-scale user-centered study of human-AV interaction, the resulting models show that all three factors produce significant differences in the model's transitions, supporting trustworthy multi-agent collaborative decision-making and explainable human-AV co-driving.

📝 Abstract
Current robotic agents, such as autonomous vehicles (AVs) and drones, need to deal with uncertain real-world environments with appropriate situational awareness (SA), risk awareness, coordination, and decision-making. The SymAware project strives to address this issue by designing an architecture for artificial awareness in multi-agent systems, enabling safe collaboration of autonomous vehicles and drones. However, these agents will also need to interact with human users (drivers, pedestrians, drone operators), which in turn requires an understanding of how to model the human in the interaction scenario, and how to foster trust and transparency between the agent and the human. In this work, we aim to create a data-driven model of a human driver to be integrated into our SA architecture, grounding our research in the principles of trustworthy human-agent interaction. To collect the data necessary for creating the model, we conducted a large-scale user-centered study on human-AV interaction, in which we investigate the interaction between the AV's transparency and the users' behavior. The contributions of this paper are twofold: first, we illustrate in detail our human-AV study and its findings; second, we present the resulting Markov chain models of the human driver computed from the study's data. Our results show that depending on the AV's transparency, the scenario's environment, and the users' demographics, we can obtain significant differences in the model's transitions.
Problem

Research questions and friction points this paper is trying to address.

Designing artificial awareness for safe autonomous vehicle collaboration
Modeling human drivers to enhance human-AV trust and transparency
Analyzing AV transparency impact on user behavior via data-driven models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Blending participatory design with artificial awareness
Data-driven human driver model integration
Markov chain models from user study
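The core idea of a context-conditioned Markov chain driver model can be sketched as follows. This is a minimal illustration, not the paper's fitted model: the driver states, transparency contexts, and transition probabilities below are hypothetical placeholders; the paper derives its actual states and probabilities from the study's behavioral data.

```python
import random

# Hypothetical driver states (illustrative only; the paper's states
# are computed from its human-AV study data).
STATES = ["attentive", "monitoring", "distracted"]

# Transition matrices conditioned on the AV's transparency level.
# Each row lists P(next state | current state) in STATES order.
# Probabilities are made up for illustration.
TRANSITIONS = {
    "high_transparency": {
        "attentive":  [0.70, 0.25, 0.05],
        "monitoring": [0.40, 0.50, 0.10],
        "distracted": [0.30, 0.40, 0.30],
    },
    "low_transparency": {
        "attentive":  [0.50, 0.30, 0.20],
        "monitoring": [0.20, 0.50, 0.30],
        "distracted": [0.10, 0.30, 0.60],
    },
}

def step(state: str, transparency: str, rng: random.Random) -> str:
    """Sample the driver's next state given the AV transparency context."""
    weights = TRANSITIONS[transparency][state]
    return rng.choices(STATES, weights=weights, k=1)[0]

def simulate(start: str, transparency: str, n_steps: int, seed: int = 0) -> list:
    """Roll out a short driver-state trajectory under one context."""
    rng = random.Random(seed)
    trajectory = [start]
    for _ in range(n_steps):
        trajectory.append(step(trajectory[-1], transparency, rng))
    return trajectory
```

In the same spirit as the paper, environment features and demographics could condition the transition matrices alongside transparency, e.g. by keying `TRANSITIONS` on a tuple of context factors.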
Ana Tanevska
Uppsala University, Uppsala, Sweden
Ananthapathmanabhan Ratheesh Kumar
Uppsala University, Uppsala, Sweden
Arabinda Ghosh
Max Planck Institute for Software Systems, Kaiserslautern, Germany
Ernesto Casablanca
Newcastle University, Newcastle upon Tyne, England
Ginevra Castellano
Professor at Uppsala University
Social robotics, human-robot interaction, affective computing, intelligent interactive systems
Sadegh Soudjani
Professor and Chair in Cyber-Physical Systems | Max Planck Institute | University of Birmingham
Cyber-Physical Systems, Safe Autonomy & AI, Model Checking, Formal Methods, Quantum Verification