TAGF: Time-aware Gated Fusion for Multimodal Valence-Arousal Estimation

📅 2025-07-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multimodal valence-arousal estimation, performance degradation arises from modality-specific noise and temporal misalignment between audio and visual streams. To address this, we propose a time-aware gated fusion framework. Our method employs a BiLSTM-based temporal gating mechanism that adaptively modulates recurrent attention outputs, explicitly modeling dynamic cross-modal interactions and temporal evolution of features. Furthermore, we introduce a multi-step cross-modal feature integration strategy to enhance robustness against asynchronous modalities. Experiments on the Aff-Wild2 benchmark demonstrate that our approach achieves state-of-the-art performance. Notably, it exhibits significantly improved stability under temporal misalignment conditions compared to baseline models, enabling more reliable capture of fine-grained, time-varying emotional dynamics in real-world videos.

Technology Category

Application Category

📝 Abstract
Multimodal emotion recognition often suffers from performance degradation in valence-arousal estimation due to noise and misalignment between audio and visual modalities. To address this challenge, we introduce TAGF, a Time-aware Gated Fusion framework for multimodal emotion recognition. The TAGF adaptively modulates the contribution of recursive attention outputs based on temporal dynamics. Specifically, the TAGF incorporates a BiLSTM-based temporal gating mechanism to learn the relative importance of each recursive step and effectively integrates multistep cross-modal features. By embedding temporal awareness into the recursive fusion process, the TAGF effectively captures the sequential evolution of emotional expressions and the complex interplay between modalities. Experimental results on the Aff-Wild2 dataset demonstrate that TAGF achieves competitive performance compared with existing recursive attention-based models. Furthermore, TAGF exhibits strong robustness to cross-modal misalignment and reliably models dynamic emotional transitions in real-world conditions.
Problem

Research questions and friction points this paper is trying to address.

Addresses noise and misalignment in multimodal emotion recognition
Adaptively modulates attention outputs using temporal dynamics
Improves robustness to cross-modal misalignment in real-world conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Time-aware gated fusion for multimodal emotion
BiLSTM-based temporal gating mechanism
Adaptive modulation of recursive attention outputs
🔎 Similar Papers
No similar papers found.