🤖 AI Summary
This study addresses emotion recognition on resource-constrained wearable devices using single-lead ECG signals, enabling real-time, low-power binary classification of positive (e.g., amusement, tenderness, gratitude) versus negative (e.g., sadness, disgust, anger) emotions. Methodologically, we propose a multi-domain feature extraction framework (time, frequency, and nonlinear domains), selective feature fusion, and hybrid feature selection, coupled with a comparative evaluation of personalized versus generalized ensemble models; each ECG segment is classified independently, and the final label is decided by majority voting. Experimental results show that the personalized model achieves a mean accuracy of 95.59%, significantly outperforming the generalized model (69.92%) and exceeding the accuracies reported by comparable studies. The key contribution is empirical validation that high-accuracy, personalized emotion recognition is feasible using ECG alone while maintaining low computational overhead and strict real-time constraints, thereby enabling practical, continuous affective monitoring at the edge.
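The segment-level scheme can be sketched as follows: classify each epoch's feature vector independently, then majority-vote the per-epoch labels into one decision per recording. This is a minimal illustration, not the authors' code; the soft-voting base learners (logistic regression, random forest, SVM) and all function names are assumptions, since the summary does not name the ensemble members.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def build_voting_classifier() -> VotingClassifier:
    """Hypothetical ensemble; the paper's exact base learners are not given here."""
    return VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
        ],
        voting="soft",  # average class probabilities across base learners
    )

def classify_recording(model: VotingClassifier, segment_features: np.ndarray) -> int:
    """Label one recording from its per-epoch feature matrix.

    model: an already-fitted classifier with integer labels {0, 1}.
    segment_features: array of shape (n_epochs, n_features).
    Returns 0 (negative affect) or 1 (positive affect) by majority vote.
    """
    votes = model.predict(segment_features)               # one label per epoch
    return int(np.bincount(votes, minlength=2).argmax())  # most frequent label
```

In the personalized setting such a model would be fitted on (part of) each participant's own epochs; in the generalized setting it is fitted only on epochs from other participants.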
📝 Abstract
Negative emotions are linked to the onset of neurodegenerative diseases and dementia, yet they are often difficult to detect through observation. Physiological signals from wearable devices offer a promising noninvasive method for continuous emotion monitoring. In this study, we propose a lightweight, resource-efficient machine learning approach for binary emotion classification, distinguishing between negative (sadness, disgust, anger) and positive (amusement, tenderness, gratitude) affective states using only electrocardiography (ECG) signals. The method is designed for deployment in resource-constrained systems, such as Internet of Things (IoT) devices: by avoiding computationally expensive multimodal inputs, it reduces battery consumption and cloud data transmission. We used ECG data from 218 CSV files drawn from four studies in the Psychophysiology of Positive and Negative Emotions (POPANE) dataset, which comprises recordings from 1,157 healthy participants across seven studies. Each file represents a unique subject-emotion pair, and the ECG signals, recorded at 1000 Hz, were segmented into 10-second epochs to reflect real-world usage. Our approach integrates multi-domain feature extraction, selective feature fusion, and a voting classifier. We evaluated it using a participant-exclusive generalized model and a participant-inclusive personalized model. The personalized model achieved the best performance, with an average accuracy of 95.59%, outperforming the generalized model, which reached 69.92% accuracy. Comparisons with other studies on POPANE and similar datasets show that our approach consistently outperforms existing methods. This work highlights the effectiveness of personalized models in emotion recognition and their suitability for wearable applications that require accurate, low-power, real-time emotion tracking.
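As a rough illustration of the segmentation and multi-domain feature extraction described above, the sketch below splits a 1000 Hz recording into 10-second epochs and computes a few time-, frequency-, and nonlinear-domain features per epoch. The specific features (mean RR, SDNN, RMSSD, Welch band powers, Poincare SD1/SD2), the crude peak detector, and the band edges are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np
from scipy.signal import find_peaks, welch

FS = 1000        # sampling rate (Hz), per the dataset description
EPOCH_S = 10     # epoch length (s), per the paper

def segment_epochs(ecg: np.ndarray) -> np.ndarray:
    """Split one recording into non-overlapping 10-second epochs."""
    n = len(ecg) // (FS * EPOCH_S)
    return ecg[: n * FS * EPOCH_S].reshape(n, FS * EPOCH_S)

def extract_features(epoch: np.ndarray) -> np.ndarray:
    """Time-, frequency-, and nonlinear-domain features for a single epoch."""
    # Crude R-peak detection; a real pipeline would use a proper QRS detector.
    peaks, _ = find_peaks(
        epoch,
        distance=int(0.4 * FS),                      # refractory period ~400 ms
        height=np.mean(epoch) + 2 * np.std(epoch),
    )
    rr = np.diff(peaks) / FS                         # RR intervals (s)
    if len(rr) < 3:                                  # too few beats detected
        return np.full(8, np.nan)

    # Time domain: mean RR, SDNN, RMSSD.
    mean_rr, sdnn = np.mean(rr), np.std(rr)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))

    # Frequency domain: Welch band powers of the raw epoch (illustrative bands).
    f, pxx = welch(epoch, fs=FS, nperseg=4096)
    df = f[1] - f[0]
    low = pxx[(f >= 0.5) & (f < 5.0)].sum() * df     # low-band power
    high = pxx[(f >= 5.0) & (f < 40.0)].sum() * df   # high-band power

    # Nonlinear domain: Poincare descriptors SD1/SD2 of the RR series.
    sd1 = np.sqrt(0.5) * np.std(np.diff(rr))
    sd2 = np.sqrt(max(2 * sdnn**2 - sd1**2, 0.0))

    return np.array([mean_rr, sdnn, rmssd, low, high,
                     low / (high + 1e-12), sd1, sd2])
```

The two evaluation paradigms then differ only in how epochs are assigned to folds: a participant-exclusive (generalized) split keeps each subject's epochs entirely in train or test (e.g., scikit-learn's GroupKFold keyed by subject ID), whereas the participant-inclusive (personalized) split allows a subject's own epochs in the training set.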