🤖 AI Summary
In multimodal valence-arousal estimation, performance degrades due to modality-specific noise and temporal misalignment between the audio and visual streams. To address this, we propose a time-aware gated fusion framework. Our method employs a BiLSTM-based temporal gating mechanism that adaptively weights the outputs of successive recursive attention steps, explicitly modeling dynamic cross-modal interactions and the temporal evolution of features, and integrates these multi-step cross-modal features to improve robustness against asynchronous modalities. Experiments on the Aff-Wild2 benchmark show that our approach achieves performance competitive with existing recursive attention-based models. Notably, it remains significantly more stable under temporal misalignment than baseline models, enabling more reliable capture of fine-grained, time-varying emotional dynamics in real-world videos.
📝 Abstract
Multimodal emotion recognition often suffers from degraded valence-arousal estimation due to noise and misalignment between the audio and visual modalities. To address this challenge, we introduce TAGF, a Time-aware Gated Fusion framework for multimodal emotion recognition. TAGF adaptively modulates the contribution of recursive attention outputs based on temporal dynamics: a BiLSTM-based temporal gating mechanism learns the relative importance of each recursive step and integrates the resulting multi-step cross-modal features. By embedding temporal awareness into the recursive fusion process, TAGF captures the sequential evolution of emotional expressions and the complex interplay between modalities. Experimental results on the Aff-Wild2 dataset demonstrate that TAGF achieves competitive performance compared with existing recursive attention-based models. Furthermore, TAGF exhibits strong robustness to cross-modal misalignment and reliably models dynamic emotional transitions in real-world conditions.
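To make the gating idea concrete, below is a minimal PyTorch-style sketch of a BiLSTM-based temporal gate applied to the outputs of recursive cross-modal attention steps. The class name, feature dimensions, and the softmax-weighted-sum fusion are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: BiLSTM-based temporal gating over recursive
# attention outputs. Names and shapes are assumptions for illustration.
import torch
import torch.nn as nn


class TemporalGatedFusion(nn.Module):
    """Gate and fuse features produced by successive recursive attention steps."""

    def __init__(self, feat_dim: int, hidden_dim: int = 128):
        super().__init__()
        # BiLSTM reads the sequence of per-step fused features and models
        # how their relevance evolves across the recursion.
        self.bilstm = nn.LSTM(feat_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # One scalar gate per recursive step, derived from the BiLSTM context.
        self.gate = nn.Linear(2 * hidden_dim, 1)

    def forward(self, step_feats: torch.Tensor) -> torch.Tensor:
        # step_feats: (batch, num_steps, feat_dim), one audio-visual feature
        # per recursive attention step.
        context, _ = self.bilstm(step_feats)                 # (B, S, 2H)
        weights = torch.softmax(self.gate(context), dim=1)   # (B, S, 1)
        # Weighted sum over recursive steps -> single fused representation.
        return (weights * step_feats).sum(dim=1)             # (B, feat_dim)


if __name__ == "__main__":
    fuser = TemporalGatedFusion(feat_dim=256)
    steps = torch.randn(4, 3, 256)   # batch of 4 clips, 3 recursive steps
    fused = fuser(steps)
    print(fused.shape)               # torch.Size([4, 256])
```

In this sketch the gate learns, per sample, how much each recursive step should contribute, which mirrors the paper's stated goal of weighting recursive attention outputs by their temporal relevance rather than averaging them uniformly.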