Explainable Transformer-CNN Fusion for Noise-Robust Speech Emotion Recognition

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of performance degradation and lack of interpretability in Speech Emotion Recognition (SER) under real-world noise, this paper proposes an end-to-end interpretable dual-stream fusion framework. One stream uses Wav2Vec 2.0 to model long-range temporal dependencies in raw waveforms; the other applies a 1D-CNN to noise-robust handcrafted spectral features (MFCCs, ZCR, RMSE). The two streams are dynamically fused via Attentive Temporal Pooling. The paper introduces the first Transformer-CNN hybrid dual-stream architecture for SER and pioneers the systematic integration of SHAP (feature-level) and Score-CAM (time-frequency-level) attributions for cross-modal interpretability. The method achieves state-of-the-art accuracy on RAVDESS, TESS, SAVEE, and CREMA-D, and under real-noise conditions (SAS-KIIT) it is significantly more robust than single-stream baselines. Visual analysis confirms that the attention mechanisms transfer effectively across acoustic domains.
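The fusion step described above can be sketched in a few lines. This is an illustrative toy version, not the authors' implementation: the frame counts, embedding dimensions, and attention weights below are arbitrary stand-ins, and a single learned vector scores each frame before a softmax-weighted average collapses the sequence.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attentive_temporal_pooling(frames, w):
    """Collapse a (T, d) sequence of frame embeddings into one (d,)
    utterance vector, weighting frames by attention scores."""
    scores = frames @ w        # (T,) relevance score per frame
    alpha = softmax(scores)    # attention weights, sum to 1
    return alpha @ frames      # attention-weighted average over time

# Toy dual-stream fusion: pool each stream, then concatenate.
rng = np.random.default_rng(0)
wav2vec_frames = rng.normal(size=(50, 8))   # stand-in for Wav2Vec 2.0 outputs
spectral_frames = rng.normal(size=(50, 4))  # stand-in for MFCC/ZCR/RMSE frames
w1, w2 = rng.normal(size=8), rng.normal(size=4)
fused = np.concatenate([
    attentive_temporal_pooling(wav2vec_frames, w1),
    attentive_temporal_pooling(spectral_frames, w2),
])
print(fused.shape)  # (12,)
```

In the paper's framework a classifier head would operate on the fused vector; here the point is only that pooling happens per stream before fusion, so each stream keeps its own attention distribution over time.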

📝 Abstract
Speech Emotion Recognition (SER) systems often degrade in performance when exposed to the unpredictable acoustic interference found in real-world environments. Additionally, the opacity of deep learning models hinders their adoption in trust-sensitive applications. To bridge this gap, we propose a Hybrid Transformer-CNN framework that unifies the contextual modeling of Wav2Vec 2.0 with the spectral stability of 1D-Convolutional Neural Networks. Our dual-stream architecture processes raw waveforms to capture long-range temporal dependencies while simultaneously extracting noise-resistant spectral features (MFCC, ZCR, RMSE) via a custom Attentive Temporal Pooling mechanism. We conducted extensive validation across four diverse benchmark datasets: RAVDESS, TESS, SAVEE, and CREMA-D. To rigorously test robustness, we subjected the model to non-stationary acoustic interference using real-world noise profiles from the SAS-KIIT dataset. The proposed framework demonstrates superior generalization and state-of-the-art accuracy across all datasets, significantly outperforming single-branch baselines under realistic environmental interference. Furthermore, we address the "black-box" problem by integrating SHAP and Score-CAM into the evaluation pipeline. These tools provide granular visual explanations, revealing how the model strategically shifts attention between temporal and spectral cues to maintain reliability in the presence of complex environmental noise.
Problem

Research questions and friction points this paper is trying to address.

Enhances speech emotion recognition in noisy real-world environments
Improves model transparency for trust-sensitive applications
Unifies contextual and spectral features for robust performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Transformer-CNN framework for contextual and spectral fusion
Attentive Temporal Pooling for noise-resistant feature extraction
SHAP and Score-CAM integration for model explainability
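The handcrafted stream is built on frame-level spectral features. As a minimal sketch (not the authors' pipeline), ZCR and RMS energy can be computed directly with NumPy on a framed signal; MFCCs would typically come from a library such as librosa (`librosa.feature.mfcc`), so they are omitted here. The frame length, hop size, and the synthetic 440 Hz tone are illustrative choices.

```python
import numpy as np

def frame_signal(y, frame_len=1024, hop=512):
    """Slice a 1-D signal into overlapping frames of shape (n, frame_len)."""
    n = 1 + max(0, (len(y) - frame_len) // hop)
    return np.stack([y[i * hop : i * hop + frame_len] for i in range(n)])

def zero_crossing_rate(frames):
    """Fraction of adjacent sample pairs whose sign changes, per frame."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def rms_energy(frames):
    """Root-mean-square energy per frame (RMSE)."""
    return np.sqrt(np.mean(frames ** 2, axis=1))

# Synthetic test signal: 1 s of a 440 Hz tone at 16 kHz, amplitude 0.5.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 440 * t)

frames = frame_signal(y)
zcr = zero_crossing_rate(frames)   # ~2 * 440 / 16000 ≈ 0.055 per frame
rmse = rms_energy(frames)          # ~0.5 / sqrt(2) ≈ 0.354 per frame
```

Per-frame trajectories like these form the spectral stream's input sequence, which the 1D-CNN then processes before attentive pooling.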
Sudip Chakrabarty
School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India; Amygdala-AI India Lab, Bhubaneswar, India
Pappu Bishwas
School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India; Amygdala-AI India Lab, Bhubaneswar, India
Rajdeep Chatterjee