Benchmarking and Bridging Emotion Conflicts for Multimodal Emotion Reasoning

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) overlook inter-modal affective conflicts (e.g., audio-visual emotion incongruence), leading to an audio-dominant bias in multimodal emotion reasoning (MER). To address this, we propose CA-MER—the first benchmark explicitly designed to evaluate MER performance under conflicting modality conditions—and introduce MoSEAR, a framework with two modules: (1) MoSE, modality-specific experts with a regularized gating mechanism that reduces modality bias in the fine-tuning heads; and (2) AR, an attention reallocation mechanism that rebalances modality contributions in the frozen backbone at inference time. MoSEAR keeps the backbone frozen and tunes only lightweight heads, making the fusion both parameter-efficient and less biased. Experiments demonstrate state-of-the-art performance across MER2023, EMER, DFEW, and CA-MER, with a +5.2% average accuracy gain on conflict samples while preserving performance on congruent samples.
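
To make the MoSE idea concrete, below is a minimal sketch of modality-specific experts fused through a regularized gate, assuming per-modality embeddings of equal dimension. The class name, shapes, and the entropy-based regularizer are illustrative assumptions; the paper's exact head design may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoSEHead(nn.Module):
    """Sketch: one lightweight expert per modality, fused by a gate whose
    weight distribution is regularized so no single modality (e.g., audio)
    dominates. Hypothetical interface, not the paper's released code."""

    def __init__(self, dim: int, num_modalities: int = 3):
        super().__init__()
        # One small MLP expert per modality (e.g., audio, visual, text).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_modalities)
        )
        # Gate sees all modality features and scores each expert.
        self.gate = nn.Linear(dim * num_modalities, num_modalities)

    def forward(self, feats):
        # feats: list of [B, dim] per-modality feature tensors.
        expert_out = torch.stack([e(f) for e, f in zip(self.experts, feats)], dim=1)  # [B, M, dim]
        weights = F.softmax(self.gate(torch.cat(feats, dim=-1)), dim=-1)              # [B, M]
        fused = (weights.unsqueeze(-1) * expert_out).sum(dim=1)                       # [B, dim]
        # Regularizer: penalize low-entropy gates, i.e., discourage the gate
        # from collapsing onto one modality. Added to the task loss with a
        # small coefficient during head fine-tuning.
        gate_entropy = -(weights * weights.clamp_min(1e-8).log()).sum(-1).mean()
        reg_loss = -gate_entropy
        return fused, reg_loss
```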

📝 Abstract
Despite their strong performance in multimodal emotion reasoning, existing Multimodal Large Language Models (MLLMs) often overlook scenarios involving emotion conflicts, where emotional cues from different modalities are inconsistent. To fill this gap, we first introduce CA-MER, a new benchmark designed to examine MLLMs under realistic emotion conflicts. It consists of three subsets: video-aligned, audio-aligned, and consistent, in which only one modality, or all modalities, reflect the true emotion. Evaluations on our CA-MER reveal that current state-of-the-art emotion MLLMs systematically over-rely on the audio signal during emotion conflicts, neglecting critical cues from the visual modality. To mitigate this bias, we propose MoSEAR, a parameter-efficient framework that promotes balanced modality integration. MoSEAR consists of two modules: (1) MoSE, modality-specific experts with a regularized gating mechanism that reduces modality bias in the fine-tuning heads; and (2) AR, an attention reallocation mechanism that rebalances modality contributions in frozen backbones during inference. Our framework offers two key advantages: it mitigates emotion conflicts and improves performance on consistent samples, without incurring a trade-off between the audio and visual modalities. Experiments on multiple benchmarks, including MER2023, EMER, DFEW, and our CA-MER, demonstrate that MoSEAR achieves state-of-the-art performance, particularly under modality conflict conditions.
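
The AR module rebalances cross-modal attention at inference without updating backbone weights. A minimal sketch of the underlying rescale-and-renormalize pattern follows; the modality tags, boost factors, and function name are assumptions for illustration, not the paper's exact reallocation rule.

```python
import torch

def reallocate_attention(attn, modality_ids, boost, eps=1e-8):
    """Illustrative attention reallocation (AR) at inference time.

    attn:         [B, H, Q, K] softmaxed attention weights from a frozen layer
    modality_ids: [K] integer tag per key token (e.g., 0=audio, 1=visual, 2=text)
    boost:        dict mapping modality id -> multiplicative factor,
                  e.g., {0: 0.8, 1: 1.2} to damp audio and amplify visual
    """
    # Build a per-key scale factor from the modality tags.
    scale = torch.ones_like(attn)
    for m, factor in boost.items():
        mask = modality_ids == m          # [K] boolean mask over key tokens
        scale[..., mask] = factor
    # Rescale, then renormalize so each query's weights still sum to 1.
    attn = attn * scale
    return attn / attn.sum(dim=-1, keepdim=True).clamp_min(eps)
```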
Problem

Research questions and friction points this paper is trying to address.

Examines MLLMs' handling of inconsistent emotional cues across modalities
Addresses bias toward audio signals in current emotion MLLMs
Proposes framework for balanced modality integration in emotion reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces CA-MER benchmark for emotion conflict scenarios
Proposes MoSEAR framework for balanced modality integration
Uses regularized gating and attention reallocation mechanisms