🤖 AI Summary
Existing multimodal emotion recognition (MER) methods overlook inter-modal emotional conflicts and suffer from training bias induced by unified supervision labels, particularly on samples whose modalities strongly disagree. To address this, we propose TiCAL, a typicality-guided, consistency-aware framework that emulates the staged nature of human emotion perception. TiCAL integrates pseudo-unimodal emotion label generation, dynamic modality-consistency assessment, and typicality estimation, and embeds features in a hyperbolic space for fine-grained emotional representation. A consistency-weighted loss function guides multimodal fusion, mitigating the supervision noise introduced by conflicting samples. Extensive experiments show that TiCAL outperforms state-of-the-art methods, surpassing DMD by about 2.6% on the CMU-MOSEI and MER2023 benchmarks, with particularly strong gains on high-conflict samples. This work establishes a paradigm for robust, conflict-aware multimodal emotion modeling.
📝 Abstract
Multimodal Emotion Recognition (MER) aims to accurately identify human emotional states by integrating heterogeneous modalities such as visual, auditory, and textual data. Existing approaches predominantly rely on unified emotion labels to supervise model training, often overlooking a critical challenge: inter-modal emotion conflicts, wherein different modalities within the same sample may express divergent emotional tendencies. In this work, we address this overlooked issue by proposing a novel framework, Typicality-based Consistent-aware Multimodal Emotion Recognition (TiCAL), inspired by the stage-wise nature of human emotion perception. TiCAL dynamically assesses the consistency of each training sample by leveraging pseudo-unimodal emotion labels alongside typicality estimation. To further enhance emotion representation, we embed features in a hyperbolic space, enabling the capture of fine-grained distinctions among emotional categories. By incorporating consistency estimates into the learning process, our method improves model performance, particularly on samples exhibiting high modality inconsistency. Extensive experiments on benchmark datasets, e.g., CMU-MOSEI and MER2023, validate the effectiveness of TiCAL in mitigating inter-modal emotional conflicts and enhancing overall recognition accuracy, with about 2.6% improvement over the state-of-the-art DMD.
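The paper does not give implementation details here, but the core idea of a consistency-weighted loss can be illustrated with a minimal sketch. The sketch below is a hypothetical simplification, not TiCAL's actual formulation: it scores each sample's consistency as the fraction of modalities whose pseudo-unimodal prediction agrees with the unified label, then uses that score to down-weight the cross-entropy on high-conflict samples. The function names, the agreement-based score, and the `floor` parameter are all illustrative assumptions.

```python
import numpy as np

def consistency_weights(unimodal_logits, labels):
    """Hypothetical consistency score: fraction of modalities whose
    pseudo-unimodal prediction agrees with the unified label.
    unimodal_logits: list of (batch, classes) arrays, one per modality."""
    agree = np.stack([(l.argmax(axis=-1) == labels).astype(float)
                      for l in unimodal_logits])   # (modalities, batch)
    return agree.mean(axis=0)                      # (batch,)

def consistency_weighted_ce(fused_logits, unimodal_logits, labels, floor=0.2):
    """Cross-entropy on the fused prediction, down-weighted for samples
    with high inter-modal conflict (the floor keeps some gradient signal)."""
    w = np.maximum(consistency_weights(unimodal_logits, labels), floor)
    # numerically stable log-softmax
    z = fused_logits - fused_logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels]
    return float((w * ce).mean())
```

In this toy form, a sample whose visual, audio, and text branches all disagree with the unified label contributes only `floor` times its usual loss, so the unified label's supervision noise on conflicting samples is attenuated.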