🤖 AI Summary
Early screening for cardiovascular diseases demands high-accuracy, low-cost auscultation assistance, yet deep learning models are hindered by the scarcity of synchronized multimodal (PCG+ECG) and multi-channel phonocardiogram (PCG) data. To address this, we propose a data augmentation framework that integrates traditional signal processing with denoising diffusion probabilistic models (WaveGrad/DiffWave) and, crucially, adapt the self-supervised speech model Wav2Vec 2.0 to multimodal, multi-channel heart sound classification. Our method synthesizes high-fidelity augmented samples and jointly models the temporal dynamics of PCG and ECG signals. On the CinC 2016 single-channel benchmark, our model achieves 92.48% accuracy (UAR: 93.05%); on synchronized PCG-ECG data, it attains an MCC of 0.8380. It also outperforms state-of-the-art methods in multi-channel classification. Our core contributions are: (i) the first effective transfer of Wav2Vec 2.0 to non-speech biomedical signals, and (ii) an end-to-end, generalizable paradigm for multi-source heart sound analysis.
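The augmentation framework relies on denoising diffusion probabilistic models (WaveGrad/DiffWave), which learn to invert a fixed Gaussian noising process. As a hedged illustration of that core DDPM idea, and not the authors' implementation, the forward (noising) process on a toy heart-sound-like waveform can be sketched as follows; the schedule parameters and sample rate here are placeholder choices of ours:

```python
import numpy as np

def make_noise_schedule(num_steps=50, beta_start=1e-4, beta_end=0.05):
    """Linear beta schedule; alpha_bar[t] = prod_{s<=t} (1 - beta_s)."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alpha_bars = np.cumprod(1.0 - betas)
    return betas, alpha_bars

def diffuse(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0): scaled clean signal plus Gaussian noise.

    A diffusion vocoder is trained to predict eps from x_t, which lets it
    run this process in reverse and synthesize new waveforms.
    """
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

# Toy 1-second stand-in for a PCG segment at a hypothetical 1 kHz rate.
rng = np.random.default_rng(0)
fs = 1000
x0 = np.sin(2 * np.pi * 25 * np.arange(fs) / fs)
_, alpha_bars = make_noise_schedule()
xt, eps = diffuse(x0, t=25, alpha_bars=alpha_bars, rng=rng)
```

At later timesteps `alpha_bars[t]` shrinks toward zero, so `xt` approaches pure noise; the learned reverse process walks back from that noise to a clean, novel sample.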
📝 Abstract
Cardiovascular diseases (CVDs) are the leading cause of death worldwide, accounting for approximately 17.9 million deaths each year. Early detection is critical, creating a demand for accurate and inexpensive pre-screening methods. Deep learning has recently been applied to classify abnormal heart sounds indicative of CVDs using synchronised phonocardiogram (PCG) and electrocardiogram (ECG) signals, as well as multichannel PCG (mPCG). However, state-of-the-art architectures remain underutilised due to the limited availability of synchronised and multichannel datasets. Augmented datasets and pre-trained models provide a pathway to overcome these limitations, enabling transformer-based architectures to be trained effectively. This work combines traditional signal processing with denoising diffusion models, WaveGrad and DiffWave, to create an augmented dataset for fine-tuning a Wav2Vec 2.0-based classifier on multimodal and multichannel heart sound datasets. The approach achieves state-of-the-art performance. On the Computing in Cardiology (CinC) 2016 dataset of single-channel PCG, accuracy, unweighted average recall (UAR), sensitivity, specificity and Matthews correlation coefficient (MCC) reach 92.48%, 93.05%, 93.63%, 92.48%, 94.93% and 0.8283, respectively. Using the synchronised PCG and ECG signals of the training-a dataset from CinC, 93.14%, 92.21%, 94.35%, 90.10%, 95.12% and 0.8380 are achieved for accuracy, UAR, sensitivity, specificity and MCC, respectively. Using a wearable vest dataset consisting of mPCG data, the model achieves 77.13% accuracy, 74.25% UAR, 86.47% sensitivity, 62.04% specificity, and 0.5082 MCC. These results demonstrate the effectiveness of transformer-based models for CVD detection when supported by augmented datasets, highlighting their potential to advance multimodal and multichannel heart sound classification.
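All of the reported metrics (accuracy, UAR, sensitivity, specificity, MCC) derive from the binary confusion matrix of normal versus abnormal recordings. A minimal sketch of how they relate, with illustrative counts of our own rather than figures from the paper:

```python
import numpy as np

def binary_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts.

    tp/fn count abnormal recordings; tn/fp count normal ones.
    """
    sens = tp / (tp + fn)                  # sensitivity: recall on abnormal
    spec = tn / (tn + fp)                  # specificity: recall on normal
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    uar = (sens + spec) / 2                # unweighted average recall
    # Matthews correlation coefficient: balanced even under class skew.
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    )
    return {"acc": acc, "uar": uar, "sens": sens, "spec": spec, "mcc": mcc}

# Hypothetical counts for a 200-recording test set.
m = binary_metrics(tp=90, fp=8, fn=6, tn=96)
```

UAR and MCC are the two figures least flattered by class imbalance, which is why heart sound benchmarks report them alongside raw accuracy.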