🤖 AI Summary
Phonocardiogram (PCG) classification suffers from label scarcity and poor model generalization, particularly under out-of-distribution (OOD) conditions. Method: This work systematically evaluates the impact of audio augmentation strategies on representation quality in self-supervised contrastive learning (SSL) for PCG analysis. It conducts the first large-scale empirical study of augmentations in PCG—including time-frequency masking, time stretching, additive noise, and phase perturbation—and introduces an effect-size–based criterion for augmentation selection. Contribution/Results: The optimal augmentation combination improves SSL representation robustness, reducing accuracy degradation on OOD data to only 10% (versus 32% for supervised baselines). This yields substantial gains in downstream classification performance and establishes a reproducible, principled framework for augmentation optimization in low-resource PCG analysis.
📝 Abstract
Despite recent advancements in deep learning, its application in real-world medical settings, such as phonocardiogram (PCG) classification, remains limited. A significant barrier is the lack of high-quality annotated datasets, which hampers the development of robust, generalizable models that perform well on newly collected, out-of-distribution (OOD) data. Self-Supervised Learning (SSL), particularly contrastive learning, has shown promise in mitigating data scarcity by leveraging unlabeled data to enhance model robustness and effectiveness. Although SSL methods have been proposed and studied in other domains, work on the impact of data augmentations on model robustness for PCG classification remains limited. In particular, while augmentations are a key component of SSL, selecting the most suitable transformations during training is highly challenging and time-consuming. Improper augmentations can cause substantial performance degradation and even hinder the network's ability to learn meaningful representations. Addressing this gap, our research explores and evaluates a wide range of audio-based augmentations and uncovers combinations that enhance SSL model performance in PCG classification. We conduct a comprehensive comparative analysis across multiple datasets and downstream tasks, assessing the impact of various augmentations on model performance and generalization. Our findings reveal that, depending on the training distribution, augmentation choice significantly influences model robustness: fully supervised models experience up to a 32% drop in effectiveness on unseen data, whereas SSL models demonstrate greater resilience, losing only 10% or even improving in some cases. By calculating each augmentation's effect size on model training, this study also sheds light on the most promising and appropriate augmentations for robust PCG signal processing.
These insights equip researchers and practitioners with valuable guidelines for building more robust, reliable models in PCG signal processing.
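The augmentation families evaluated in the paper (additive noise, time stretching, phase perturbation, and time-frequency masking) can be sketched as simple NumPy transforms that produce two stochastic "views" of a PCG segment, the standard input pair for contrastive SSL. The function names, parameter ranges, and the naive resampling-based stretch below are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x, snr_db=20.0):
    # Additive Gaussian noise at a target signal-to-noise ratio (illustrative default).
    sig_power = np.mean(x ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(noise_power), x.shape)

def time_stretch(x, rate=1.1):
    # Naive time stretch by linear resampling (changes duration; no pitch preservation).
    n_out = int(len(x) / rate)
    return np.interp(np.linspace(0, len(x) - 1, n_out), np.arange(len(x)), x)

def phase_perturb(x, max_shift=0.1):
    # Randomly jitter the Fourier phase while keeping the magnitude spectrum.
    spec = np.fft.rfft(x)
    jitter = rng.uniform(-max_shift, max_shift, spec.shape)
    return np.fft.irfft(np.abs(spec) * np.exp(1j * (np.angle(spec) + jitter)), n=len(x))

def spec_mask(spec, max_f=8, max_t=8):
    # SpecAugment-style masking: zero one random frequency band and one time band
    # of a (freq, time) magnitude spectrogram.
    s = spec.copy()
    f0 = rng.integers(0, max(1, s.shape[0] - max_f))
    t0 = rng.integers(0, max(1, s.shape[1] - max_t))
    s[f0:f0 + rng.integers(1, max_f + 1), :] = 0.0
    s[:, t0:t0 + rng.integers(1, max_t + 1)] = 0.0
    return s

def two_views(x):
    # Two independently augmented views of one PCG segment for a contrastive loss.
    v1 = add_noise(time_stretch(x, rate=rng.uniform(0.9, 1.1)))
    v2 = phase_perturb(add_noise(x))
    return v1, v2
```

In a contrastive setup such as SimCLR, `two_views` would feed the encoder a positive pair per segment; the effect-size criterion described above would then compare downstream metrics across runs that toggle each transform.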