🤖 AI Summary
Dynamic Facial Expression Recognition (DFER) faces two key challenges: insufficient exploitation of fine-grained affective cues in generated textual descriptions, and difficulty suppressing facial motions irrelevant to emotion. To address these, we propose GRACE, a framework that achieves token-level cross-modal alignment between linguistic cues and visually salient regions via coarse-to-fine emotional text enhancement and motion-difference-weighted attention. GRACE further combines dynamic motion modeling, semantic text refinement, and entropy-regularized optimal transport to localize emotion-relevant spatiotemporal features precisely. Evaluated on three benchmark datasets, GRACE achieves state-of-the-art performance, with notable gains on ambiguous classes (e.g., "surprise" vs. "fear") and long-tailed categories, attaining higher Unweighted Average Recall (UAR) and Weighted Average Recall (WAR) than existing methods.
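To make the motion-difference idea concrete, here is a minimal PyTorch sketch of frame-difference weighting: frames that change strongly from their predecessor receive larger weights, so near-static (emotion-irrelevant) segments are down-weighted. The tensor layout, normalization, and function name are illustrative assumptions, not the paper's exact formulation, which applies the weighting inside attention.

```python
import torch

def motion_difference_weights(frames: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Per-frame motion weights for a clip of face crops.

    frames: (B, T, C, H, W). Returns (B, T) weights in [0, 1]; frames that
    differ strongly from their predecessor get larger weights, suppressing
    near-static, emotion-irrelevant dynamics (illustrative sketch only).
    """
    # Frame-to-frame absolute difference, averaged over channels and pixels.
    diff = (frames[:, 1:] - frames[:, :-1]).abs().mean(dim=(2, 3, 4))  # (B, T-1)
    # The first frame has no predecessor, so it gets zero motion.
    diff = torch.cat([torch.zeros_like(diff[:, :1]), diff], dim=1)     # (B, T)
    # Normalize per clip so weights are comparable across videos.
    return diff / (diff.amax(dim=1, keepdim=True) + eps)
```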
📄 Abstract
Dynamic Facial Expression Recognition (DFER) aims to identify human emotions from temporally evolving facial movements and plays a critical role in affective computing. While recent vision-language approaches have introduced semantic textual descriptions to guide expression recognition, existing methods still face two key limitations: they often underutilize the subtle emotional cues embedded in generated text, and they lack effective mechanisms for filtering out facial dynamics irrelevant to emotional expression. To address these gaps, we propose GRACE (Granular Representation Alignment for Cross-modal Emotion recognition), which integrates dynamic motion modeling, semantic text refinement, and token-level cross-modal alignment to enable precise localization of emotionally salient spatiotemporal features. Our method constructs emotion-aware textual descriptions via a Coarse-to-fine Affective Text Enhancement (CATE) module and highlights expression-relevant facial motion through a motion-difference weighting mechanism. These refined semantic and visual signals are aligned at the token level using entropy-regularized optimal transport. Experiments on three benchmark datasets demonstrate that our method significantly improves recognition performance, particularly in challenging settings with ambiguous or imbalanced emotion classes, establishing new state-of-the-art (SOTA) results in terms of both UAR and WAR.
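The token-level alignment step relies on entropy-regularized optimal transport, which is commonly solved with Sinkhorn iterations. The sketch below assumes uniform marginals over tokens and a cosine-similarity cost; GRACE's actual cost matrix, marginals, and hyperparameters may differ.

```python
import torch
import torch.nn.functional as F

def sinkhorn_alignment(text_tok: torch.Tensor, vis_tok: torch.Tensor,
                       eps: float = 0.05, n_iters: int = 50) -> torch.Tensor:
    """Entropy-regularized OT plan between N text tokens and M visual tokens.

    text_tok: (N, d), vis_tok: (M, d). Returns an (N, M) transport plan with
    approximately uniform marginals; large entries mark token pairs treated
    as corresponding (illustrative sketch, not the paper's exact setup).
    """
    # Cost = 1 - cosine similarity between L2-normalized token embeddings.
    cost = 1.0 - F.normalize(text_tok, dim=-1) @ F.normalize(vis_tok, dim=-1).T
    K = torch.exp(-cost / eps)  # Gibbs kernel from the regularized cost
    # Uniform marginals over text and visual tokens.
    a = torch.full((cost.shape[0],), 1.0 / cost.shape[0], device=cost.device)
    b = torch.full((cost.shape[1],), 1.0 / cost.shape[1], device=cost.device)
    u = torch.ones_like(a)
    for _ in range(n_iters):  # alternating Sinkhorn scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan, shape (N, M)
```

The regularization strength `eps` trades off sharpness against smoothness of the plan: smaller values yield near-one-to-one token matches but converge more slowly, while larger values spread mass across many token pairs.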