🤖 AI Summary
This study addresses the challenge of objective, continuous pain assessment by proposing a multi-representation fusion method for automated pain-state identification based on electrodermal activity (EDA). Methodologically, a single EDA signal is transformed into several complementary representations, which are rendered as waveforms and combined within a single multi-representation diagram; a classifier is then trained on this joint representation. Unlike conventional cross-modal signal-level or feature-level fusion, the framework draws all representations from one modality and thereby sidesteps heterogeneous signal alignment. Extensive experiments spanning various processing and filtering techniques and multiple representation combinations show results consistently comparable to, and in several cases better than, traditional fusion methods, indicating potential for continuous, clinically useful pain monitoring.
📝 Abstract
Pain is a multifaceted phenomenon that affects a substantial portion of the population. Reliable and consistent evaluation benefits those experiencing pain and underpins the development of effective and advanced management strategies. Automatic pain-assessment systems deliver continuous monitoring, inform clinical decision-making, and aim to reduce distress while preventing functional decline. By incorporating physiological signals, these systems provide objective, accurate insights into an individual's condition. This study has been submitted to the *Second Multimodal Sensing Grand Challenge for Next-Gen Pain Assessment (AI4PAIN)*. The proposed method introduces a pipeline that uses electrodermal activity signals as the input modality. Multiple representations of the signal are created, rendered as waveforms, and combined within a single multi-representation diagram. Extensive experiments incorporating various processing and filtering techniques, along with multiple representation combinations, demonstrate the effectiveness of the proposed approach. It consistently yields results comparable, and in several cases superior, to traditional fusion methods, establishing it as a robust alternative for integrating different signal representations or modalities.
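The core idea — deriving several complementary views of one EDA trace and stacking them into a single joint representation for a downstream classifier — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three views used here (z-scored waveform, first difference, and moving average) are hypothetical placeholders, since the abstract does not specify the exact representation set.

```python
import numpy as np

def multi_representation(eda: np.ndarray, win: int = 5) -> np.ndarray:
    """Stack complementary views of a single EDA trace into one
    multi-channel array (views x samples), so a classifier sees them
    jointly rather than through late fusion of separate branches.

    The specific views below are illustrative stand-ins for whatever
    representations the actual pipeline derives from the signal.
    """
    x = (eda - eda.mean()) / (eda.std() + 1e-8)                # normalized waveform
    diff = np.diff(x, prepend=x[0])                            # sample-to-sample change rate
    smooth = np.convolve(x, np.ones(win) / win, mode="same")   # slow-trend component
    return np.stack([x, diff, smooth])                         # shape: (3, len(eda))
```

Because every channel originates from the same signal, the channels are sample-aligned by construction — the property the abstract highlights as an advantage over fusing heterogeneous modalities.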