🤖 AI Summary
To address the robustness degradation of multimodal affective computing under real-world EEG data missingness, this paper proposes a two-step joint multi-modal learning (JMML) framework. First, self-supervised intra-modal learning (JEC-SSL) is performed independently on the speech and EEG modalities. Second, an extended deep canonically correlated cross-modal autoencoder (E-DCC-CAE) maps both modalities into a common representation space in which they are maximally correlated. During training, the framework leverages both modalities; at inference, it achieves high-accuracy emotion recognition from speech alone. Experiments demonstrate that the method attains performance close to full-modality baselines in the absence of EEG, significantly improves over unimodal speech-based accuracy, and validates effectiveness and generalizability across multiple benchmark datasets. The core contribution is transferring the reliability of EEG guidance into speech affective representations, thereby easing the reliability-feasibility trade-off under modality missingness; to the authors' knowledge, it is the first joint multi-modal learning approach combining speech and EEG for reliable AER.
📝 Abstract
Computer interfaces are advancing towards using multiple modalities to enable better human-computer interaction. Automatic emotion recognition (AER) can make these interactions natural and meaningful, thereby enhancing the user experience. Though speech is the most direct and intuitive modality for AER, it is not reliable because it can be intentionally faked by humans. Physiological modalities such as EEG, on the other hand, are more reliable and impossible to fake. However, EEG is infeasible in realistic usage scenarios because it requires a specialized recording setup. In this paper, one of our primary aims is to ride on the reliability of the EEG modality to facilitate robust AER on the speech modality. Our approach uses both modalities during training so that emotion can be reliably identified at inference time, even in the absence of the more reliable EEG modality. We propose a two-step joint multi-modal learning approach (JMML) that exploits both the intra- and inter-modal characteristics to construct emotion embeddings that enrich the performance of AER. In the first step, intra-modal learning is done independently on the individual modalities using JEC-SSL. This is followed by inter-modal learning using the proposed extended variant of the deep canonically correlated cross-modal autoencoder (E-DCC-CAE), which learns the joint properties of both modalities by mapping them into a common representation space such that the modalities are maximally correlated. These emotion embeddings hold properties of both modalities, thereby enhancing the performance of the ML classifier used for AER. Experimental results show the efficacy of the proposed approach. To the best of our knowledge, this is the first attempt to combine speech and EEG in a joint multi-modal learning approach for reliable AER.
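The correlation-maximizing objective at the heart of the E-DCC-CAE step can be illustrated with a minimal linear sketch: given speech and EEG embedding matrices, canonical correlation analysis finds the projections of each view that are maximally correlated. This is only the linear core of a DCCA-style objective; the paper's model uses deep encoders and autoencoder reconstruction terms. The function name, toy data, and dimensions below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def top_canonical_correlation(X, Y):
    """Largest canonical correlation between two views.

    X, Y: (n_samples, dim) feature matrices, e.g. speech and EEG
    embeddings. Returns the highest correlation achievable by linear
    projections of each view -- the quantity a DCCA-style objective
    maximizes (here with linear maps, for illustration only).
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    r = 1e-4  # small ridge term for numerical stability
    Sxx = Xc.T @ Xc / (n - 1) + r * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + r * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)
    # Whiten each view via its Cholesky factor, then take singular
    # values of the whitened cross-covariance: these are the
    # canonical correlations.
    Lx_inv = np.linalg.inv(np.linalg.cholesky(Sxx))
    Ly_inv = np.linalg.inv(np.linalg.cholesky(Syy))
    T = Lx_inv @ Sxy @ Ly_inv.T
    return float(np.linalg.svd(T, compute_uv=False)[0])

# Toy data: both "modalities" are noisy linear views of one shared
# (emotion-like) latent factor z, so their top canonical correlation
# is high even though the raw feature spaces differ.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))                       # shared latent factor
speech = z @ rng.normal(size=(1, 4)) + 0.5 * rng.normal(size=(200, 4))
eeg = z @ rng.normal(size=(1, 3)) + 0.5 * rng.normal(size=(200, 3))
rho = top_canonical_correlation(speech, eeg)
print(rho)  # close to 1, driven by the shared factor
```

In the two-step framework described above, this correlation objective is applied not to raw features but to the representations learned in the intra-modal step, so the resulting common space can be reached from speech alone at inference time.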