Generative Deep Learning and Signal Processing for Data Augmentation of Cardiac Auscultation Signals: Improving Model Robustness Using Synthetic Audio

📅 2024-10-14
🏛️ arXiv.org
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
To address label scarcity, class imbalance, and limited model robustness in phonocardiogram (PCG) classification, this paper proposes a hybrid data augmentation framework that integrates conventional audio augmentations (e.g., noise injection, time-stretching, filtering) with conditional generative diffusion modeling. The authors are, to their knowledge, the first to adapt WaveGrad and DiffWave for high-fidelity, class-conditional synthesis of PCG signals, using the synthetic audio to construct an enriched training dataset. They also introduce a multi-metric robustness evaluation centered on the Matthews Correlation Coefficient (MCC) that jointly assesses in-distribution accuracy and out-of-distribution (OOD) generalization. Experiments across multiple public PCG datasets show that the method improves CNN-based classifiers' accuracy, balanced accuracy, and MCC, mitigating the effects of imbalanced data and improving stability under realistic clinical acoustic noise.
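As a rough illustration of the conventional-augmentation side described above, the sketch below applies noise injection, time stretching, and band-pass filtering to a PCG waveform. The parameter values, the 2 kHz sampling rate, and the file name are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of conventional PCG audio augmentations (assumed parameters).
import numpy as np
import librosa
from scipy.signal import butter, filtfilt

def add_noise(pcg, snr_db=20.0):
    """Inject Gaussian noise at a target signal-to-noise ratio (dB)."""
    signal_power = np.mean(pcg ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return pcg + np.random.normal(0.0, np.sqrt(noise_power), size=pcg.shape)

def time_stretch(pcg, rate=1.1):
    """Stretch or compress the signal in time without changing pitch."""
    return librosa.effects.time_stretch(pcg, rate=rate)

def bandpass(pcg, fs=2000, low=25.0, high=400.0, order=4):
    """Band-pass filter to a typical heart-sound frequency range."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, pcg)

# Example: derive three augmented variants from one (hypothetical) recording.
# y, fs = librosa.load("pcg.wav", sr=2000, mono=True)
# augmented = [add_noise(y), time_stretch(y), bandpass(y, fs)]
```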

📝 Abstract
Accurately interpreting cardiac auscultation signals plays a crucial role in diagnosing and managing cardiovascular diseases. However, the paucity of labelled data inhibits the training of classification models. To overcome this challenge, researchers have turned to generative deep learning techniques combined with signal processing to augment the existing data and improve cardiac auscultation classification models. However, the primary focus of prior studies has been on model performance as opposed to model robustness. Robustness, in this case, is defined as both the in-distribution and out-of-distribution performance, measured by metrics such as the Matthews correlation coefficient. This work shows that more robust abnormal heart sound classifiers can be trained using an augmented dataset. The augmentations consist of traditional audio approaches and synthetic audio conditionally generated using the WaveGrad and DiffWave diffusion models. Both the in-distribution and out-of-distribution performance can be improved over various datasets when training a convolutional neural network-based classification model with this augmented dataset. With the performance increase encompassing not only accuracy but also balanced accuracy and the Matthews correlation coefficient, an augmented dataset significantly contributes to resolving issues of imbalanced datasets. This, in turn, helps provide a more general and robust classifier.
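The robustness measures named in the abstract can be computed with standard scikit-learn metrics; a minimal sketch with placeholder labels (0 = normal, 1 = abnormal) follows. Because MCC uses all four cells of the confusion matrix, it stays informative when the abnormal class is rare, which is the imbalance scenario the abstract highlights.

```python
# Minimal sketch of the evaluation metrics (placeholder labels, not paper results).
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             matthews_corrcoef)

y_true = [0, 0, 0, 0, 1, 1]   # 0 = normal, 1 = abnormal heart sound
y_pred = [0, 0, 0, 1, 1, 0]

print("accuracy:         ", accuracy_score(y_true, y_pred))
print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("MCC:              ", matthews_corrcoef(y_true, y_pred))
```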
Problem

Research questions and friction points this paper is trying to address.

Addressing scarcity of labeled cardiac auscultation data
Improving robustness of heart sound classifiers
Enhancing performance with synthetic audio augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative deep learning for synthetic audio creation
Signal processing combined with the WaveGrad and DiffWave diffusion models (see the sketch after this list)
Augmented dataset improves CNN classifier robustness
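As a rough illustration of class-conditional diffusion sampling in the spirit of WaveGrad and DiffWave, the sketch below runs a standard DDPM reverse process. The noise-prediction network `eps_model`, the noise schedule, the waveform length, and the 50-step budget are hypothetical stand-ins, not the authors' architecture or hyperparameters.

```python
# Illustrative class-conditional DDPM sampling loop (assumed model and schedule).
import torch

@torch.no_grad()
def sample_pcg(eps_model, class_label, length=16000, steps=50, device="cpu"):
    """Draw one synthetic PCG waveform conditioned on a class label."""
    betas = torch.linspace(1e-4, 0.05, steps, device=device)   # noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, length, device=device)          # start from pure noise
    c = torch.tensor([class_label], device=device)     # 0 = normal, 1 = abnormal

    for t in reversed(range(steps)):
        t_batch = torch.full((1,), t, device=device, dtype=torch.long)
        eps = eps_model(x, t_batch, c)                 # predicted noise at step t
        coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:                                      # add noise except on last step
            x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)
        else:
            x = mean
    return x.squeeze(0)
```

A call such as `sample_pcg(model, class_label=1)` would then yield one synthetic abnormal-class waveform to mix into the augmented training set alongside the conventional augmentations.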
Leigh Abbott
School of Electrical Engineering, Computing, and Mathematical Sciences (EECMS), Faculty of Science and Engineering, Curtin University, Bentley 6102, WA, Australia
Milan Marocchi
School of Electrical Engineering, Computing, and Mathematical Sciences (EECMS), Faculty of Science and Engineering, Curtin University, Bentley 6102, WA, Australia
Matthew Fynn
School of Electrical Engineering, Computing, and Mathematical Sciences (EECMS), Faculty of Science and Engineering, Curtin University, Bentley 6102, WA, Australia
Yue Rong
Professor at Curtin University
Electrical Engineering, Signal Processing, Wireless Communications
Sven Nordholm
School of Electrical Engineering, Computing, and Mathematical Sciences (EECMS), Faculty of Science and Engineering, Curtin University, Bentley 6102, WA, Australia