Towards Pre-training an Effective Respiratory Audio Foundation Model

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pre-training foundation models for respiratory audio analysis is hindered by the limited scale and low diversity of available respiratory sound data. Method: This paper systematically investigates cross-domain transfer learning and joint fine-tuning paradigms. We find that general-purpose audio models pre-trained on AudioSet significantly outperform models pre-trained specifically on respiratory sounds. To mitigate spectral distortion, we propose a feature aggregation mechanism that preserves frequency-wise structure. We conduct a unified evaluation of mainstream models—including AST and PaSST—on the OPERA benchmark, incorporating multi-dataset joint training and spectrogram alignment strategies. Contribution/Results: Our approach achieves new state-of-the-art performance on OPERA. The code is fully open-sourced, establishing the first reusable, robust foundational representation model for intelligent respiratory sound analysis.

📝 Abstract
Recent advancements in foundation models have sparked interest in respiratory audio foundation models. However, the effectiveness of applying conventional pre-training schemes to datasets that are small-sized and lack diversity has not been sufficiently verified. This study aims to explore better pre-training practices for respiratory sounds by comparing numerous pre-trained audio models. Our investigation reveals that models pre-trained on AudioSet, a general audio dataset, are more effective than the models specifically pre-trained on respiratory sounds. Moreover, combining AudioSet and respiratory sound datasets for further pre-training enhances performance, and preserving the frequency-wise information when aggregating features is vital. Along with more insights found in the experiments, we establish a new state-of-the-art for the OPERA benchmark, contributing to advancing respiratory audio foundation models. Our code is available online at https://github.com/nttcslab/eval-audio-repr/tree/main/plugin/OPERA.
Problem

Research questions and friction points this paper is trying to address.

Exploring better pre-training for small, non-diverse respiratory audio datasets
Comparing effectiveness of general vs. respiratory-specific pre-trained audio models
Optimizing feature aggregation and hybrid pre-training for respiratory sound analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-train on AudioSet for better respiratory sound representations
Combine AudioSet and respiratory datasets for pre-training
Preserve frequency-wise information in feature aggregation
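The last point above can be made concrete with a small sketch. Assuming a ViT-style audio encoder (e.g. AST) whose patch features form a grid over the time-frequency plane, the conventional approach mean-pools over both axes, while a frequency-preserving aggregation pools only over time and keeps per-frequency features. All shapes and names below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical encoder output: a patch-feature grid of shape
# (freq_patches, time_frames, embed_dim), as a ViT-style audio
# model might produce once patches are arranged on the
# time-frequency plane. Random values stand in for real features.
rng = np.random.default_rng(0)
features = rng.standard_normal((8, 50, 768))  # (F, T, D)

# Conventional aggregation: mean-pool over BOTH axes, collapsing
# all frequency structure into a single D-dim vector.
pooled_global = features.mean(axis=(0, 1))    # shape (768,)

# Frequency-preserving aggregation: pool only over time, then
# concatenate the per-frequency vectors so frequency-wise
# information survives into the final embedding.
pooled_per_freq = features.mean(axis=1)       # shape (8, 768)
embedding = pooled_per_freq.reshape(-1)       # shape (8 * 768,) = (6144,)

print(pooled_global.shape, embedding.shape)
```

The trade-off is a larger embedding (F times the dimension), in exchange for keeping the spectral layout that respiratory sound classifiers can exploit.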