🤖 AI Summary
Virtual avatars in MR/VR suffer from limited emotional expressiveness, and existing facial-tracking-based emotion recognition methods exhibit poor generalizability due to substantial inter-individual variability in facial expressions.
Method: This paper introduces the first multi-strategy personalized calibration framework for inclusive emotion recognition. It integrates real-time facial Action Unit (AU) dynamic modeling, a lightweight user self-calibration mechanism, and incremental transfer learning—jointly balancing population-level generalizability and user-specific expressivity to systematically mitigate recognition bias arising from cross-user expression diversity.
Contribution/Results: Evaluated on a diverse cohort spanning age, skin tone, and neurodiverse populations, the framework achieves a 23.6% improvement in emotion recognition accuracy and reduces F1-score variance by 41%, significantly enhancing model fairness, robustness, and inclusivity.
📝 Abstract
The limited expressiveness of virtual user representations in Mixed Reality and Virtual Reality can inhibit an integral part of communication: emotional expression. Emotion recognition based on face tracking is often used to compensate for this. However, emotional facial expressions are highly individual, which is why many approaches have difficulties recognizing unique variations of emotional expressions. We propose several strategies to improve face tracking systems for emotion recognition with and without user intervention for the Affective Interaction Workshop at CHI '25.