Leveraging Vision Transformers for Enhanced Classification of Emotions using ECG Signals

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses ECG-based emotion recognition by proposing an end-to-end visual representation method built on a hybrid CNN-SE-ViT architecture. First, raw ECG signals are transformed into time-frequency images via joint continuous wavelet transform (CWT) and power spectral density (PSD) mapping. A fused model then processes these images: a convolutional neural network (CNN) captures local texture patterns, a squeeze-and-excitation (SE) module provides channel-wise attention, and a Vision Transformer (ViT) models long-range spatial dependencies. To the best of the authors' knowledge, this is the first work to systematically integrate a ViT into emotion recognition from ECG-derived images. Evaluated on the YAAD and DREAMER datasets, the method achieves state-of-the-art performance in both seven-class discrete emotion classification and classification along the valence-arousal-dominance (VAD) dimensions. The results demonstrate the efficacy and generalizability of visualizing physiological signals and employing hybrid vision architectures for affective computing.
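The signal-to-image step can be illustrated with a short sketch. The snippet below, assuming pywt for the CWT and scipy for the Welch PSD, builds a two-channel time-frequency image from a 1-D ECG segment; the specific fusion of scalogram and PSD (tiling the PSD as a second channel) and all parameter values are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of the signal-to-image step: a CWT scalogram plus a Welch PSD
# channel. The exact fusion used by the paper is not specified here; tiling the
# PSD as a second image channel is an assumption made for illustration only.
import numpy as np
import pywt
from scipy.signal import welch

def ecg_to_image(ecg, fs=256, scales=np.arange(1, 129), wavelet="morl"):
    """Turn a 1-D ECG segment into a 2-channel time-frequency image."""
    # Channel 1: continuous wavelet transform magnitude (scalogram).
    coeffs, _ = pywt.cwt(ecg, scales, wavelet, sampling_period=1.0 / fs)
    scalogram = np.abs(coeffs)                      # shape: (n_scales, n_samples)

    # Channel 2: Welch power spectral density, resampled to n_scales bins and
    # tiled across time so both channels share the same spatial size.
    _, psd = welch(ecg, fs=fs, nperseg=min(len(ecg), 512))
    psd_resampled = np.interp(
        np.linspace(0, len(psd) - 1, len(scales)), np.arange(len(psd)), psd
    )
    psd_map = np.tile(psd_resampled[:, None], (1, scalogram.shape[1]))

    # Normalise each channel to [0, 1] before stacking.
    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    return np.stack([norm(scalogram), norm(psd_map)], axis=0)
```

For a 10-second segment at 256 Hz this yields a (2, 128, 2560) array, which would then be resized to the classifier's input resolution.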

📝 Abstract
Biomedical signals provide insights into various conditions affecting the human body. Beyond diagnostic capabilities, these signals offer a deeper understanding of how specific organs respond to an individual's emotions and feelings. For instance, ECG data can reveal changes in heart rate variability linked to emotional arousal, stress levels, and autonomic nervous system activity, offering a window into the physiological basis of our emotional states. Recent advances in the field diverge from conventional approaches by leveraging transformer architectures, which surpass traditional machine learning and deep learning methods. We begin by assessing the effectiveness of the Vision Transformer (ViT), a leading model in image classification, for identifying emotions in imaged ECGs. We then present and evaluate an improved version of ViT that integrates CNN and SE blocks, aiming to boost performance on imaged ECGs for emotion detection. Our method unfolds in two phases: first, we apply advanced preprocessing techniques to purify the signals and convert them into interpretable images using continuous wavelet transform and power spectral density analysis; second, we introduce an enhanced Vision Transformer architecture augmented with convolutional neural network components to tackle the challenges of emotion recognition. The robustness of our methodology was thoroughly evaluated using ECG data from the YAAD and DREAMER datasets. On the YAAD dataset, our approach outperformed existing state-of-the-art methods in classifying seven distinct emotional states, as well as in valence and arousal classification. Similarly, on the DREAMER dataset, our method excelled in distinguishing between valence, arousal, and dominance, surpassing current leading techniques.
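To make the hybrid architecture concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a small convolutional stem extracts local texture, a squeeze-and-excitation block reweights channels, and the resulting feature map is flattened into tokens for a standard transformer encoder. Layer sizes, depth, the classification head, and the omission of positional embeddings are simplifying assumptions rather than the authors' exact configuration.

```python
# Illustrative sketch of a CNN + SE + ViT hybrid; not the authors' implementation.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global-pool, bottleneck MLP, channel reweighting."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                         # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                    # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                              # excitation: channel-wise attention

class CNNSEViT(nn.Module):
    def __init__(self, in_ch=2, dim=256, depth=6, heads=8, num_classes=7):
        super().__init__()
        self.stem = nn.Sequential(                # CNN stem for local texture patterns
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.BatchNorm2d(dim), nn.ReLU(),
        )
        self.se = SEBlock(dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)
        # Note: positional embeddings are omitted here for brevity; a real ViT adds them.

    def forward(self, x):                         # x: (B, 2, H, W) time-frequency image
        feat = self.se(self.stem(x))              # (B, dim, H/4, W/4)
        tokens = feat.flatten(2).transpose(1, 2)  # (B, N, dim) patch-like tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1)
        out = self.encoder(tokens)                # long-range dependencies via self-attention
        return self.head(out[:, 0])               # classify from the [CLS] token
```

For example, a batch of resized two-channel images can be classified with `CNNSEViT()(torch.randn(8, 2, 64, 64))`, which returns seven-class logits; swapping the head for a 3-unit linear layer would cover the VAD outputs.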
Problem

Research questions and friction points this paper is trying to address.

Classifying human emotions using ECG signals through advanced transformer architectures
Improving emotion recognition by integrating CNN components with Vision Transformers
Enhancing classification of valence, arousal and dominance from ECG data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision Transformer adapted for ECG emotion classification
Hybrid ViT-CNN-SE architecture boosts emotion recognition
Advanced preprocessing converts ECG signals to images
Pubudu L. Indrasiri
School of Engineering, Deakin University, 75, Pigdons Rd, Waurn Ponds, 3216, VIC, Australia
Bipasha Kashyap
School of Engineering, Deakin University, 75, Pigdons Rd, Waurn Ponds, 3216, VIC, Australia
Pubudu N. Pathirana
Professor, Head of Discipline, Mechatronics, E&E Engineering, Deakin University
Human Motion Capture, Assistive Device Design, Computer Networks, Machine Learning