FEEL: Quantifying Heterogeneity in Physiological Signals for Generalizable Emotion Recognition

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited generalization of emotion recognition models across heterogeneous physiological signals and the absence of a unified evaluation benchmark. To this end, the authors introduce FEEL, the first large-scale benchmark for emotion recognition based on electrodermal activity (EDA) and photoplethysmography (PPG), integrating 19 public datasets. They systematically evaluate the cross-dataset generalization of 16 models spanning diverse modeling paradigms and provide the first quantitative analysis of how heterogeneity in experimental setups, sensor types, and labeling strategies affects generalization. Experiments show that the fine-tuned self-supervised model CLSP achieves the best valence and arousal classification performance (best F1 on 71 of 114 tasks), that handcrafted features consistently outperform end-to-end approaches, and that cross-device, cross-labeling, and cross-scenario transfer yields F1 scores ranging from 0.72 to 0.81, indicating strong transfer potential from real-life recordings and expert-annotated data.
📝 Abstract
Emotion recognition from physiological signals has substantial potential for applications in mental health and emotion-aware systems. However, the lack of standardized, large-scale evaluations across heterogeneous datasets limits progress and model generalization. We introduce FEEL, the first large-scale benchmarking study of emotion recognition using electrodermal activity (EDA) and photoplethysmography (PPG) signals across 19 publicly available datasets. We evaluate 16 architectures spanning traditional machine learning, deep learning, and self-supervised pretraining approaches, structured into four representative modeling paradigms. Our study includes both within-dataset and cross-dataset evaluations, analyzing generalization across variations in experimental settings, device types, and labeling strategies. Our results show that fine-tuned contrastive signal-language pretraining (CLSP) models achieve the highest F1 across arousal and valence classification tasks (71/114), while simpler models like Random Forests, LDA, and MLP remain competitive (36/114). Models leveraging handcrafted features (107/114) consistently outperform those trained on raw signal segments, underscoring the value of domain knowledge in low-resource, noisy settings. Further cross-dataset analyses reveal that models trained on real-life setting data generalize well to lab (F1 = 0.79) and constraint-based settings (F1 = 0.78). Similarly, models trained on expert-annotated data transfer effectively to stimulus-labeled (F1 = 0.72) and self-reported datasets (F1 = 0.76). Moreover, models trained on lab-based devices also demonstrate high transferability to both custom wearable devices (F1 = 0.81) and the Empatica E4 (F1 = 0.73), underscoring the influence of heterogeneity. More information about FEEL can be found on our website https://alchemy18.github.io/FEEL_Benchmark/.
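To make the cross-dataset protocol concrete, the following is a minimal, hypothetical sketch (not the authors' code): train a classifier on one dataset's handcrafted features and report binary F1 on a held-out dataset. The toy threshold classifier, the single-feature representation, and the data values are all illustrative assumptions.

```python
# Illustrative cross-dataset evaluation in the style of FEEL's protocol:
# train on dataset A, test on dataset B, report F1 for the positive class.
# All features, labels, and the threshold classifier are hypothetical.

def f1_score(y_true, y_pred):
    """Binary F1 for the positive class (label 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def fit_threshold(features, labels):
    """Toy classifier over one handcrafted feature: the decision
    threshold is the midpoint between the two class means."""
    pos = [x for x, y in zip(features, labels) if y == 1]
    neg = [x for x, y in zip(features, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# Hypothetical datasets: (feature value, high/low arousal label) pairs.
train_set = [(0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1)]
test_set = [(0.1, 0), (0.4, 0), (0.7, 1), (0.95, 1)]

threshold = fit_threshold(*zip(*train_set))
preds = [1 if x > threshold else 0 for x, _ in test_set]
truth = [y for _, y in test_set]
print(f"cross-dataset F1 = {f1_score(truth, preds):.2f}")
```

In the actual benchmark this train/test pairing would be repeated across the 19 datasets and grouped by setting, device, and labeling strategy to produce the transfer F1 scores reported above.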
Problem

Research questions and friction points this paper is trying to address.

emotion recognition
physiological signals
dataset heterogeneity
model generalization
benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

emotion recognition
physiological signals
cross-dataset generalization
self-supervised pretraining
benchmarking