🤖 AI Summary
Pulmonary B-mode ultrasound images suffer from severe air-induced artifacts and rely on subjective visual interpretation, hindering objective, quantitative assessment of lung aeration. To address this, we propose an end-to-end paradigm that reconstructs quantitative aeration maps directly from raw radio-frequency (RF) data—bypassing conventional beamforming and image-based interpretation. Our approach integrates physics-informed Fourier neural operators with acoustic RF simulation, enabling joint training on both simulated data and limited real-world measurements. We further design a multi-scale feature fusion architecture and a transfer learning strategy comprising simulation-based pretraining followed by fine-tuning on ex vivo porcine lung data. In ex vivo experiments, our method achieves an aeration estimation error of only 9%, substantially outperforming semi-quantitative scoring methods. It offers reader-independent, fully quantitative, and highly reproducible assessment—demonstrating strong translational potential for clinical pulmonary ultrasound.
📝 Abstract
Lung ultrasound is an increasingly adopted clinical modality for diagnosing and monitoring acute and chronic lung diseases due to its low cost and accessibility. Lung ultrasound works by emitting diagnostic pulses, receiving pressure waves and converting them into radio-frequency (RF) data, which are then processed into B-mode images with beamformers for radiologists to interpret. However, unlike conventional ultrasound for soft-tissue anatomical imaging, lung ultrasound interpretation is complicated by complex reverberations from the pleural interface caused by the inability of ultrasound to penetrate air. The indirect B-mode images make interpretation highly dependent on reader expertise, requiring years of training, which limits widespread use despite the modality's potential for high accuracy in skilled hands. To address these challenges and democratize ultrasound lung imaging as a reliable diagnostic tool, we propose LUNA, an AI model that directly reconstructs lung aeration maps from RF data, bypassing the need for traditional beamformers and indirect interpretation of B-mode images. LUNA uses a Fourier neural operator, which processes RF data efficiently in Fourier space, enabling accurate reconstruction of lung aeration maps. LUNA offers a quantitative, reader-independent alternative to traditional semi-quantitative lung ultrasound scoring methods. The development of LUNA involves both synthetic and real data: we simulate synthetic data with an experimentally validated approach and scan ex vivo swine lungs for real data. Trained on abundant simulated data and fine-tuned with a small amount of real-world data, LUNA achieves robust performance, demonstrated by an aeration estimation error of 9% in ex vivo lung scans. We demonstrate the potential of reconstructing lung aeration maps from RF data, providing a foundation for improving lung ultrasound reproducibility and diagnostic utility.
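The core idea of a Fourier neural operator—transforming the input to Fourier space, applying learned weights to a truncated set of low-frequency modes, and transforming back—can be illustrated with a minimal sketch. This is not the LUNA architecture; the function name, mode count, and single-channel setup are illustrative assumptions, and the weights would be learned parameters in practice.

```python
import numpy as np

def spectral_layer(x, weights, n_modes):
    """One FNO-style spectral mixing layer (simplified, hypothetical).

    x       : real 1-D signal, e.g. one RF channel trace
    weights : complex array of shape (n_modes,); learned in a real model
    n_modes : number of low-frequency Fourier modes retained
    """
    xf = np.fft.rfft(x)                       # map signal to Fourier space
    out_f = np.zeros_like(xf)
    out_f[:n_modes] = xf[:n_modes] * weights  # mix only the kept modes
    return np.fft.irfft(out_f, n=x.shape[0])  # back to signal space

rng = np.random.default_rng(0)
x = rng.standard_normal(256)        # stand-in for a raw RF trace
w = np.ones(16, dtype=complex)      # identity weights on the kept modes
y = spectral_layer(x, w, n_modes=16)
print(y.shape)                      # same length as the input signal
```

Because the layer operates on a fixed number of Fourier modes rather than on spatial samples, it is resolution-independent, which is one reason FNOs suit dense, high-rate signals like RF data.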