From Coarse to Nuanced: Cross-Modal Alignment of Fine-Grained Linguistic Cues and Visual Salient Regions for Dynamic Emotion Recognition

📅 2025-07-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Dynamic Facial Expression Recognition (DFER) faces two key challenges: insufficient exploitation of fine-grained affective cues from generated textual descriptions, and difficulty suppressing facial motions irrelevant to emotion. To address these, we propose GRACE, a framework that achieves token-level cross-modal alignment between linguistic cues and visually salient regions via coarse-to-fine emotional text enhancement and motion-difference-weighted attention. GRACE further incorporates dynamic motion modeling, semantic text refinement, and entropy-regularized optimal transport for precise spatiotemporal localization of emotion-relevant features. Evaluated on three benchmark datasets, GRACE achieves state-of-the-art performance, particularly improving recognition accuracy for ambiguous classes (e.g., “surprise” vs. “fear”) and long-tailed categories. It attains superior Unweighted Average Recall (UAR) and Weighted Average Recall (WAR) compared to existing methods.
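The motion-difference weighting mechanism is described only at a high level here. Below is a minimal sketch of the general idea, assuming simple frame differencing followed by a temporal softmax; the function name, tensor shapes, and temperature parameter are illustrative assumptions, not the authors' implementation:

```python
import torch

def motion_difference_weights(frames: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Weight frames by inter-frame motion magnitude (hypothetical sketch).

    frames: (T, C, H, W) tensor of aligned face crops.
    Returns (T,) weights emphasizing frames with large appearance change,
    i.e., candidate expression onsets and apexes.
    """
    # Frame differencing: mean absolute change between consecutive frames.
    diff = (frames[1:] - frames[:-1]).abs().mean(dim=(1, 2, 3))  # (T-1,)
    # Repeat the first difference so the weights align with all T frames.
    diff = torch.cat([diff[:1], diff])                           # (T,)
    # A temporal softmax turns raw magnitudes into attention-style weights.
    return torch.softmax(diff / tau, dim=0)
```

In GRACE these weights modulate attention over visual tokens rather than whole frames, so a faithful implementation would operate on patch-level features; the frame-level version above only conveys the weighting principle.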

📝 Abstract
Dynamic Facial Expression Recognition (DFER) aims to identify human emotions from temporally evolving facial movements and plays a critical role in affective computing. While recent vision-language approaches have introduced semantic textual descriptions to guide expression recognition, existing methods still face two key limitations: they often underutilize the subtle emotional cues embedded in generated text, and they lack sufficiently effective mechanisms for filtering out facial dynamics that are irrelevant to emotional expression. To address these gaps, we propose GRACE (Granular Representation Alignment for Cross-modal Emotion recognition), a framework that integrates dynamic motion modeling, semantic text refinement, and token-level cross-modal alignment to facilitate the precise localization of emotionally salient spatiotemporal features. Our method constructs emotion-aware textual descriptions via a Coarse-to-fine Affective Text Enhancement (CATE) module and highlights expression-relevant facial motion through a motion-difference weighting mechanism. These refined semantic and visual signals are aligned at the token level using entropy-regularized optimal transport. Experiments on three benchmark datasets demonstrate that our method significantly improves recognition performance, particularly in challenging settings with ambiguous or imbalanced emotion classes, establishing new state-of-the-art (SOTA) results in terms of both UAR and WAR.
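Entropy-regularized optimal transport is standardly solved with Sinkhorn iterations. The sketch below shows how token-level text-to-visual alignment could be computed under that formulation; the cosine cost, uniform marginals, and all names are assumptions for illustration, not the paper's code:

```python
import torch

def sinkhorn_alignment(text_tokens, visual_tokens, eps=0.1, n_iters=50):
    """Entropy-regularized OT between token sets (illustrative sketch).

    text_tokens:   (m, d) L2-normalized text token embeddings.
    visual_tokens: (n, d) L2-normalized visual token embeddings.
    Returns an (m, n) transport plan whose mass concentrates on
    semantically matched token pairs.
    """
    # Cost matrix: 1 - cosine similarity (tokens assumed pre-normalized).
    cost = 1.0 - text_tokens @ visual_tokens.T        # (m, n)
    K = torch.exp(-cost / eps)                        # Gibbs kernel

    m, n = K.shape
    a = torch.full((m,), 1.0 / m)                     # uniform text marginal
    b = torch.full((n,), 1.0 / n)                     # uniform visual marginal

    u = torch.ones_like(a)
    for _ in range(n_iters):
        # Alternate scaling so the plan satisfies both marginal constraints.
        v = b / (K.T @ u)
        u = a / (K @ v)

    # Transport plan P = diag(u) K diag(v).
    return u[:, None] * K * v[None, :]
```

A smaller eps yields sharper, closer-to-one-to-one alignments at the cost of slower convergence; that trade-off is exactly what the entropy regularization term controls.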
Problem

Research questions and friction points this paper is trying to address.

Align fine-grained text and visual cues for emotion recognition
Filter irrelevant facial dynamics in emotion expression
Improve recognition in ambiguous or imbalanced emotion classes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Granular cross-modal alignment of text and visuals
Coarse-to-fine affective text enhancement module (see the sketch after this list)
Motion-difference weighting for relevant facial dynamics
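The coarse-to-fine enhancement item above can be pictured as expanding a coarse emotion label into fine-grained facial-cue phrases before composing a description. The mapping and wording below are invented for illustration and are not the CATE module's actual prompts or vocabulary:

```python
# Hypothetical coarse-to-fine expansion: each coarse emotion label maps to
# fine-grained facial-cue phrases (illustrative, not the paper's lexicon).
COARSE_TO_FINE = {
    "happiness": ["raised lip corners", "crinkled eye corners", "lifted cheeks"],
    "surprise":  ["raised eyebrows", "widened eyes", "dropped jaw"],
    "fear":      ["drawn-together eyebrows", "tensed lower eyelids", "stretched lips"],
}

def enhance_description(coarse_label: str) -> str:
    """Compose an emotion-aware sentence from a coarse label plus fine cues."""
    cues = COARSE_TO_FINE.get(coarse_label, [])
    detail = f", with {', '.join(cues)}" if cues else ""
    return f"A face showing {coarse_label}{detail}."

# e.g. enhance_description("surprise") ->
# "A face showing surprise, with raised eyebrows, widened eyes, dropped jaw."
```

Fine-grained phrases like these give the text encoder tokens that can be matched against localized visual regions, which is what makes the token-level alignment step meaningful.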
Authors

Yu Liu
Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou, China
Leyuan Qu
Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou, China
Speech Representation Learning · Multi-modal Learning and Affective Computing
Hanlei Shi
Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou, China
Di Gao
Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou, China
Yuhua Zheng
Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou, China
Taihao Li
Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou, China