🤖 AI Summary
This study addresses the challenge of accurately capturing users’ subjective risk perception in autonomous driving, where conventional facial expression recognition proves inadequate. Through a controlled driving simulation experiment, we simultaneously collected multimodal data—including facial expressions, electrodermal activity (EDA), heart rate, eye-tracking, and vehicle kinematics—and developed a multimodal neural network to predict subjective risk perception. Results demonstrate that facial expressions exhibit no stable correlation with subjective feelings of insecurity, highlighting the limitations of traditional affective computing in this context. In contrast, a model integrating EDA and vehicle motion features achieves significantly improved prediction accuracy (Pearson’s *r* = 0.78), offering high ecological validity and real-time assessment potential. This work provides empirical evidence that risk perception in autonomous driving is largely decoupled from overt facial cues and establishes an objective quantification approach grounded in physiological–vehicle signal coupling.
📝 Abstract
Trust and perceived safety play a crucial role in the public acceptance of automated vehicles. To understand perceived risk, an experiment was conducted in a driving simulator under two automated driving styles, with and without a crossing pedestrian. Data were collected from 32 participants, comprising continuous subjective comfort ratings, vehicle motion, webcam footage for facial expression, skin conductance, heart rate, and eye tracking. The continuous subjective perceived risk ratings showed significant discomfort associated with perceived risk during cornering and braking, followed by relief or even positive comfort on continuing the ride. The dynamic driving style induced stronger discomfort than the calm driving style. The crossing pedestrian did not affect discomfort with the calm driving style but doubled the comfort decrement with the dynamic driving style. This illustrates the importance of the consequences of critical interactions in risk perception. Facial expression was successfully analyzed for 24 participants, but most (15/24) did not show any detectable facial reaction to the critical event. Among the 9 participants who did, 8 showed a Happy expression and only 4 showed a Surprise expression; Fear was never dominant. This indicates that facial expression recognition is not a reliable method for assessing perceived risk in automated vehicles. To predict perceived risk, a neural network model was implemented using vehicle motion and skin conductance. The model correlated well with reported perceived risk, demonstrating its potential for objective perceived risk assessment in automated vehicles, reducing subjective bias and highlighting areas for future research.
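The prediction approach described above can be illustrated with a minimal sketch. The paper's exact architecture, features, and training setup are not specified here, so everything below is an assumption: a tiny feedforward regressor mapping windowed vehicle-motion and skin-conductance (EDA) features to a continuous perceived-risk rating, evaluated with Pearson correlation (the metric the study reports), trained on synthetic stand-in data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (hypothetical): 200 time windows, 2 features each
# (e.g., mean |acceleration| and mean EDA level). Real features would come
# from the simulator logs and the skin-conductance sensor.
X = rng.normal(size=(200, 2))
# Assumed toy relationship: reported risk rises with both signals, plus noise.
y = 0.7 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

# One hidden layer, trained by plain gradient descent on mean squared error.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    pred = (h @ W2 + b2).ravel()       # predicted continuous risk rating
    # Backpropagate the MSE gradient through both layers.
    g_pred = (2 / len(y)) * (pred - y)[:, None]
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ g_h; gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Evaluate with Pearson's r, as in the study's reported correlation.
r = np.corrcoef(pred, y)[0, 1]
print(f"Pearson r on training data: {r:.2f}")
```

On real data, features would be extracted per time window aligned to the continuous comfort-rating signal, and the correlation would be computed on held-out drives rather than on the training set as in this toy example.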