🤖 AI Summary
This study investigates how varying intensities of empathic expression in virtual humans affect user experience in emotional counseling scenarios. Rather than treating empathy as simply present or absent, the study models it as a graded, tunable design variable. Three experimental conditions are compared: neutral dialogue, conversational empathy, and video-driven empathy (generated via the Facial Action Coding System). Evaluation relies on subjective self-report scales analyzed with one-way ANOVA. Results indicate that video-driven empathy significantly enhances users' affective empathy (*p* < .001) as well as perceived facial naturalness and expressive appropriateness; cognitive empathy, however, shows no significant improvement. The core contribution is empirical evidence that visual embodiment cues, particularly dynamic facial expressions, play a decisive role in eliciting affective empathy. The study thereby establishes empathy intensity as a viable and effective design dimension for empathic virtual agents, advancing both theoretical modeling and practical interface design in affective human–computer interaction.
📝 Abstract
As artificial intelligence (AI) systems become increasingly embedded in everyday life, the ability of interactive agents to express empathy has become critical for effective human–AI interaction, particularly in emotionally sensitive contexts. Rather than treating empathy as a binary capability, this study examines how different levels of empathic expression in virtual human interaction influence user experience. We conducted a between-subjects experiment (n = 70) in a counseling-style interaction context, comparing three virtual human conditions: a neutral dialogue-based agent, a dialogue-based empathic agent, and a video-based empathic agent that incorporates users' facial cues. Participants engaged in a 15-minute interaction and subsequently evaluated their experience using subjective measures of empathy and interaction quality. A one-way analysis of variance (ANOVA) revealed significant differences across conditions in affective empathy, perceived naturalness of facial movement, and appropriateness of facial expression. The video-based empathic expression condition elicited significantly higher affective empathy than the neutral baseline (*p* < .001) and marginally higher affective empathy than the dialogue-based condition (*p* < .10). In contrast, cognitive empathy did not differ significantly across conditions. These findings indicate that empathic expression in virtual humans should be conceptualized as a graded design variable rather than a binary capability, with visually grounded cues playing a decisive role in shaping affective user experience.
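To make the analysis concrete, the sketch below illustrates the kind of one-way ANOVA described above: three independent groups of participant ratings (neutral, dialogue-based, video-based) compared on a single self-report measure. This is a hypothetical illustration, not the authors' analysis code; the group sizes, 7-point scale range, simulated data, and pairwise follow-up test are all assumptions.

```python
# Illustrative sketch (not the authors' analysis): a one-way ANOVA comparing
# affective-empathy ratings across three between-subjects conditions, followed
# by one pairwise follow-up test. All data are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 7-point affective-empathy ratings, ~23 participants per group
# (between-subjects design, n = 70 total); means are assumed for illustration.
neutral  = rng.normal(3.5, 1.0, 23).clip(1, 7)
dialogue = rng.normal(4.3, 1.0, 23).clip(1, 7)
video    = rng.normal(5.0, 1.0, 24).clip(1, 7)

# Omnibus one-way ANOVA across the three conditions.
f_stat, p_value = stats.f_oneway(neutral, dialogue, video)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Example pairwise follow-up (video vs. neutral); a full analysis would apply
# a multiple-comparison correction such as Tukey's HSD or Bonferroni.
t_stat, p_pair = stats.ttest_ind(video, neutral)
print(f"video vs. neutral: t = {t_stat:.2f}, p = {p_pair:.4f}")
```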