🤖 AI Summary
Multimodal sentiment analysis faces two significant challenges: the high computational complexity of cross-modal interactions and the insufficient modeling of dynamic emotional shifts in conversations. This work proposes CAGMamba, a novel framework that introduces the Mamba state space model to this domain for the first time. By constructing a temporally ordered binary sequence of the context and the current utterance, it explicitly captures sentiment evolution over time. Furthermore, a gated cross-modal Mamba network enables efficient and controllable modality fusion, enhancing inter-modal information exchange while preserving modality-specific characteristics. Combined with a three-branch multi-task learning strategy, the model achieves state-of-the-art or highly competitive performance across three benchmark datasets, substantially advancing the effectiveness of multimodal sentiment analysis.
📝 Abstract
Multimodal Sentiment Analysis (MSA) requires effective modeling of cross-modal interactions and contextual dependencies while remaining computationally efficient. Existing fusion approaches predominantly rely on Transformer-based cross-modal attention, which incurs quadratic complexity with respect to sequence length and limits scalability. Moreover, contextual information from preceding utterances is often incorporated through concatenation or independent fusion, without explicit temporal modeling that captures sentiment evolution across dialogue turns. To address these limitations, we propose CAGMamba, a context-aware gated cross-modal Mamba framework for dialogue-based sentiment analysis. Specifically, we organize the contextual and current-utterance features into a temporally ordered binary sequence, which provides Mamba with explicit temporal structure for modeling sentiment evolution. To further enable controllable cross-modal integration, we propose a Gated Cross-Modal Mamba Network (GCMN) that integrates cross-modal and unimodal paths via learnable gating, balancing information fusion with modality preservation. The full model is trained with a three-branch multi-task objective over text, audio, and fused predictions. Experiments on three benchmark datasets demonstrate that CAGMamba achieves state-of-the-art or competitive results across multiple evaluation metrics. Code is available at https://github.com/User2024-xj/CAGMamba.
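The learnable gating described for the GCMN can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimension, the single-layer sigmoid gate, and all variable names (`h_cross`, `h_uni`, `W_g`, `b_g`) are illustrative assumptions; the gate blends a cross-modal path with a unimodal path so that fusion and modality preservation are traded off per dimension.

```python
# Hedged sketch of a learnable fusion gate (assumptions, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
d = 8  # assumed feature dimension

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical gate parameters; the gate conditions on both paths' features.
W_g = rng.standard_normal((d, 2 * d)) * 0.1
b_g = np.zeros(d)

def gated_fusion(h_cross, h_uni):
    """Elementwise convex blend of the two paths.

    g -> 1 favors cross-modal information exchange;
    g -> 0 preserves the modality-specific (unimodal) features.
    """
    g = sigmoid(W_g @ np.concatenate([h_cross, h_uni]) + b_g)
    return g * h_cross + (1.0 - g) * h_uni

h_cross = rng.standard_normal(d)  # e.g. text features attended by audio (assumed)
h_uni = rng.standard_normal(d)    # modality-specific text features (assumed)
fused = gated_fusion(h_cross, h_uni)
print(fused.shape)  # (8,)
```

Because the gate output lies in (0, 1), the fused vector is, per dimension, a convex combination of the two paths, which is what makes the fusion "controllable" in the sense the abstract describes.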