TACFN: Transformer-based Adaptive Cross-modal Fusion Network for Multimodal Emotion Recognition

📅 2025-05-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address redundant feature interference and insufficient modeling of complementary information in cross-modal fusion for multimodal emotion recognition, this paper proposes a Transformer-based adaptive cross-modal fusion architecture. The method introduces three key components: (1) an intra-modal, self-attention-driven feature selection mechanism that suppresses redundant representations; (2) a concatenation-based weighting strategy that generates cross-modal interaction weights, explicitly strengthening complementary feature modeling; and (3) joint optimization of multimodal representation alignment and fusion. Evaluated on the RAVDESS and IEMOCAP benchmarks, the approach achieves state-of-the-art performance, outperforming existing fusion methods. The source code and pre-trained models are publicly released.
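The intra-modal selection step can be pictured with a short sketch. The snippet below is an illustrative PyTorch approximation, not the authors' released code: one modality attends over its own time steps with self-attention and keeps only the highest-scoring positions before any cross-modal interaction (the `IntraModalSelector` class, the `keep_ratio` hyperparameter, and the top-k selection rule are assumptions made for illustration).

```python
# Minimal sketch (not the authors' exact code): a modality runs
# self-attention over its own time steps, then keeps only the
# positions that receive the most attention.
import torch
import torch.nn as nn


class IntraModalSelector(nn.Module):
    """Self-attention-driven feature selection within a single modality."""

    def __init__(self, dim: int, num_heads: int = 4, keep_ratio: float = 0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.keep_ratio = keep_ratio  # fraction of positions to keep (assumption)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) unimodal features, e.g. audio frames
        attended, weights = self.attn(x, x, x, need_weights=True)
        # Score each position by how much attention it receives on average.
        scores = weights.mean(dim=1)                      # (batch, seq_len)
        k = max(1, int(x.size(1) * self.keep_ratio))
        top_idx = scores.topk(k, dim=-1).indices          # (batch, k)
        # Gather the selected positions from the attended features.
        idx = top_idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        return attended.gather(1, idx)                    # (batch, k, dim)


if __name__ == "__main__":
    audio = torch.randn(2, 50, 128)       # toy audio features
    selector = IntraModalSelector(dim=128)
    print(selector(audio).shape)          # torch.Size([2, 25, 128])
```

The selected subset, rather than the full sequence, is what would then be exposed to the other modality, which is the source of the efficiency and redundancy-reduction claims.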

📝 Abstract
The fusion technique is the key to the multimodal emotion recognition task. Recently, cross-modal attention-based fusion methods have demonstrated high performance and strong robustness. However, cross-modal attention suffers from redundant features and does not capture complementary features well. We find that it is not necessary to use all of one modality's information to reinforce the other during cross-modal interaction; the features that can reinforce a modality may be only a subset of it. To this end, we design an innovative Transformer-based Adaptive Cross-modal Fusion Network (TACFN). Specifically, to handle redundant features, one modality performs intra-modal feature selection through a self-attention mechanism, so that the selected features can adaptively and efficiently interact with the other modality. To better capture the complementary information between the modalities, we obtain a fused weight vector by splicing (concatenation) and use this weight vector to reinforce the modalities' features. We apply TACFN to the RAVDESS and IEMOCAP datasets. For a fair comparison, we use the same unimodal representations to validate the effectiveness of the proposed fusion method. The experimental results show that TACFN brings a significant performance improvement over other methods and reaches the state of the art. All code and models can be accessed at https://github.com/shuzihuaiyu/TACFN.
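The splicing-based reinforcement described in the abstract can likewise be sketched. The block below is a hedged illustration rather than TACFN's actual implementation: pooled features from the two modalities are spliced (concatenated), mapped to a sigmoid-gated weight vector, and that vector rescales one modality's features before fusion. The `ConcatWeightFusion` name and the mean-pooling and gating choices are assumptions for the sake of a runnable example.

```python
# Minimal sketch of the concatenation-weighting idea: the weight vector
# is conditioned on both modalities, so the reinforcement is adaptive.
import torch
import torch.nn as nn


class ConcatWeightFusion(nn.Module):
    """Cross-modal reinforcement via a spliced (concatenated) weight vector."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, x_a: torch.Tensor, x_v: torch.Tensor) -> torch.Tensor:
        # x_a, x_v: (batch, seq_len, dim) features of two modalities
        pooled = torch.cat([x_a.mean(dim=1), x_v.mean(dim=1)], dim=-1)
        w = self.gate(pooled).unsqueeze(1)                # (batch, 1, dim) weights
        # Reinforce the second stream with weights derived from both modalities,
        # then combine it with the first stream into a fused vector.
        return x_a.mean(dim=1) + (w * x_v).mean(dim=1)    # (batch, dim)


if __name__ == "__main__":
    audio = torch.randn(2, 25, 128)
    video = torch.randn(2, 30, 128)
    fusion = ConcatWeightFusion(dim=128)
    print(fusion(audio, video).shape)                     # torch.Size([2, 128])
```

In the paper's setting the inputs to such a module would be the features already filtered by the intra-modal selection step, and the fused vector would feed the emotion classifier.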
Problem

Research questions and friction points this paper is trying to address.

Reduces redundant features in cross-modal emotion recognition
Improves complementary feature capture between modalities
Enhances multimodal fusion efficiency via adaptive interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based adaptive cross-modal fusion network
Intra-modal feature selection via self-attention mechanism
Spliced weight vector for complementary feature reinforcement
Feng Liu
School of Computer Science and Technology, East China Normal University, Beijing 100084, China
Ziwang Fu
MTlab, Meitu (China) Limited, Beijing 100876, China
Yunlong Wang
Institute of Acoustics, University of Chinese Academy of Sciences, Beijing 100084, China
Qijian Zheng
Fudan University