🤖 AI Summary
To address the dual challenges of performance degradation and lack of interpretability in Speech Emotion Recognition (SER) under real-world noise, this paper proposes an end-to-end interpretable dual-stream fusion framework. One stream models long-range temporal dependencies in raw waveforms using Wav2Vec 2.0; the other applies a 1D-CNN to noise-robust handcrafted spectral features (MFCCs, ZCR, RMSE). The two streams are dynamically fused via Attentive Temporal Pooling. The paper introduces the first Transformer-CNN hybrid dual-stream architecture for SER and pioneers the systematic integration of SHAP (feature-level) and Score-CAM (time-frequency-level) attributions for cross-modal interpretability. The method achieves state-of-the-art accuracy on RAVDESS, TESS, SAVEE, and CREMA-D, and under real-noise conditions (SAS-KIIT) it is significantly more robust than single-stream baselines. Visual analysis confirms that the attention mechanism transfers effectively across acoustic domains.
📝 Abstract
Speech Emotion Recognition (SER) systems often degrade in performance when exposed to the unpredictable acoustic interference found in real-world environments. Additionally, the opacity of deep learning models hinders their adoption in trust-sensitive applications. To bridge this gap, we propose a Hybrid Transformer-CNN framework that unifies the contextual modeling of Wav2Vec 2.0 with the spectral stability of 1D-Convolutional Neural Networks. Our dual-stream architecture processes raw waveforms to capture long-range temporal dependencies while a parallel branch extracts noise-resistant spectral features (MFCC, ZCR, RMSE); the two streams are fused via a custom Attentive Temporal Pooling mechanism. We conducted extensive validation across four diverse benchmark datasets: RAVDESS, TESS, SAVEE, and CREMA-D. To rigorously test robustness, we subjected the model to non-stationary acoustic interference using real-world noise profiles from the SAS-KIIT dataset. The proposed framework demonstrates superior generalization and state-of-the-art accuracy across all datasets, significantly outperforming single-branch baselines under realistic environmental interference. Furthermore, we address the "black-box" problem by integrating SHAP and Score-CAM into the evaluation pipeline. These tools provide granular visual explanations, revealing how the model strategically shifts attention between temporal and spectral cues to maintain reliability in the presence of complex environmental noise.
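The fusion idea described above can be sketched in a few lines: each stream's frame sequence is collapsed into a single vector by attention-weighted pooling over time, and the pooled vectors are concatenated before classification. The following is a minimal NumPy sketch under stated assumptions; the dimensions, the linear scoring function, and the random stand-in features are illustrative only, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_temporal_pooling(frames, score_w):
    """Collapse a (T, D) frame sequence into a (D,) vector:
    score each frame (here with a simple learned-vector dot product,
    an assumption for illustration), softmax the scores over time,
    then take the attention-weighted sum of frames."""
    attn = softmax(frames @ score_w)   # (T,) attention weights over time
    return attn @ frames               # (D,) weighted temporal summary

# Stand-in streams (shapes are illustrative, not the paper's exact dims):
wav_stream  = rng.standard_normal((100, 768))  # wav2vec-style frame embeddings
spec_stream = rng.standard_normal((100, 42))   # per-frame MFCC + ZCR + RMSE

pooled_wav  = attentive_temporal_pooling(wav_stream,  rng.standard_normal(768))
pooled_spec = attentive_temporal_pooling(spec_stream, rng.standard_normal(42))

# Concatenated vector that would feed the emotion-classifier head.
fused = np.concatenate([pooled_wav, pooled_spec])
print(fused.shape)  # (810,)
```

Pooling each stream separately before concatenation lets the model weight informative frames differently per modality, which is one plausible reading of how attention could shift toward the noise-resistant spectral branch under interference.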