The Effect of Empathic Expression Levels in Virtual Human Interaction: A Controlled Experiment

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how varying intensities of empathic expression in virtual humans affect user experience in emotional counseling scenarios. Moving beyond binary empathic presence/absence, empathy is modeled as a tunable continuous variable. Three experimental conditions are designed: neutral dialogue, conversational empathy, and video-driven empathy (generated via the Facial Action Coding System). Subjective self-report scales and one-way ANOVA are employed for evaluation. Results indicate that video-driven empathy significantly enhances users’ affective empathy (*p* < .001), facial naturalness, and expressive appropriateness; however, no significant improvement is observed in cognitive empathy. The core contribution is the first empirical validation of the decisive role of visual embodiment cues—particularly dynamic facial expressions—in eliciting affective empathy. Moreover, the study establishes empathy intensity as a viable and effective design dimension for empathic virtual agents, thereby advancing both theoretical modeling and practical interface design in affective human–computer interaction.

📝 Abstract
As artificial intelligence (AI) systems become increasingly embedded in everyday life, the ability of interactive agents to express empathy has become critical for effective human-AI interaction, particularly in emotionally sensitive contexts. Rather than treating empathy as a binary capability, this study examines how different levels of empathic expression in virtual human interaction influence user experience. We conducted a between-subject experiment (n = 70) in a counseling-style interaction context, comparing three virtual human conditions: a neutral dialogue-based agent, a dialogue-based empathic agent, and a video-based empathic agent that incorporates users' facial cues. Participants engaged in a 15-minute interaction and subsequently evaluated their experience using subjective measures of empathy and interaction quality. Results from analysis of variance (ANOVA) revealed significant differences across conditions in affective empathy, perceived naturalness of facial movement, and appropriateness of facial expression. The video-based empathic expression condition elicited significantly higher affective empathy than the neutral baseline (p < .001) and marginally higher levels than the dialogue-based condition (p < .10). In contrast, cognitive empathy did not differ significantly across conditions. These findings indicate that empathic expression in virtual humans should be conceptualized as a graded design variable, rather than a binary capability, with visually grounded cues playing a decisive role in shaping affective user experience.
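The analysis above rests on a one-way between-subjects ANOVA across the three conditions. The sketch below illustrates that procedure with a small hand-rolled F-statistic; the ratings are hypothetical placeholder numbers for illustration only, not the study's data (the actual experiment had n = 70).

```python
def one_way_anova_f(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA over the given groups."""
    k = len(groups)                      # number of conditions
    n = sum(len(g) for g in groups)      # total participants
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: variation of condition means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: variation of individual ratings around their condition mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical 7-point affective-empathy ratings per condition (illustration only)
neutral  = [3.1, 2.8, 3.4, 3.0, 2.9]
dialogue = [4.2, 3.9, 4.5, 4.1, 4.0]
video    = [5.1, 4.8, 5.4, 5.0, 4.9]

f_stat, df_b, df_w = one_way_anova_f(neutral, dialogue, video)
print(f"F({df_b}, {df_w}) = {f_stat:.2f}")
```

A large F relative to its degrees of freedom is what licenses the paper's claim of significant between-condition differences; in practice the p-value and post-hoc pairwise comparisons (e.g. the video vs. neutral contrast reported at p < .001) would be computed with a statistics package.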
Problem

Research questions and friction points this paper is trying to address.

Examines how varying empathic expression levels affect user experience in virtual human interactions
Compares neutral, dialogue-based, and video-based empathic agents in a counseling-style context
Assesses impact on affective empathy, perceived naturalness, and appropriateness of facial expressions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces a video-based empathic agent that responds to users' facial cues
Directly compares neutral, dialogue-based, and video-based empathic agents
Models empathy intensity as a graded design variable rather than a binary capability
Sung Park
Human-Centric AI Center, Taejae University
Daeho Yoon
Human-Centric AI Center, Taejae University
Jungmin Lee
Seoul National University, Korea and IZA, Germany
Labor Economics · Applied Microeconomics