Ultrasound Lung Aeration Map via Physics-Aware Neural Operators

📅 2025-01-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Pulmonary B-mode ultrasound images suffer from severe air-induced artifacts and rely on subjective visual interpretation, hindering objective, quantitative assessment of lung ventilation. To address this, we propose an end-to-end paradigm that reconstructs quantitative ventilation maps directly from raw radio-frequency (RF) data, bypassing conventional beamforming and image-based interpretation. Our approach integrates physics-informed Fourier neural operators with acoustic RF simulation, enabling synergistic training on both simulated data and limited real-world measurements. We further design a multi-scale feature-fusion architecture and a transfer-learning strategy: simulation-based pretraining followed by fine-tuning on ex vivo porcine lung data. In ex vivo experiments, our method achieves a ventilation estimation error of only 9%, substantially outperforming semi-quantitative scoring methods, and offers a reader-independent, fully quantitative, and highly reproducible assessment with strong translational potential for clinical pulmonary ultrasound.

📝 Abstract
Lung ultrasound is a growing modality in clinics for diagnosing and monitoring acute and chronic lung diseases due to its low cost and accessibility. Lung ultrasound works by emitting diagnostic pulses, receiving pressure waves and converting them into radio frequency (RF) data, which are then processed into B-mode images with beamformers for radiologists to interpret. However, unlike conventional ultrasound for soft tissue anatomical imaging, lung ultrasound interpretation is complicated by complex reverberations from the pleural interface caused by the inability of ultrasound to penetrate air. The indirect B-mode images make interpretation highly dependent on reader expertise, requiring years of training, which limits its widespread use despite its potential for high accuracy in skilled hands. To address these challenges and democratize ultrasound lung imaging as a reliable diagnostic tool, we propose LUNA, an AI model that directly reconstructs lung aeration maps from RF data, bypassing the need for traditional beamformers and indirect interpretation of B-mode images. LUNA uses a Fourier neural operator, which processes RF data efficiently in Fourier space, enabling accurate reconstruction of lung aeration maps. LUNA offers a quantitative, reader-independent alternative to traditional semi-quantitative lung ultrasound scoring methods. The development of LUNA involves synthetic and real data: we simulate synthetic data with an experimentally validated approach and scan ex vivo swine lungs as real data. Trained on abundant simulated data and fine-tuned with a small amount of real-world data, LUNA achieves robust performance, demonstrated by an aeration estimation error of 9% in ex vivo lung scans. We demonstrate the potential of reconstructing lung aeration maps from RF data, providing a foundation for improving lung ultrasound reproducibility and diagnostic utility.
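The abstract notes that LUNA's Fourier neural operator processes RF data efficiently in Fourier space. A minimal numpy sketch of the core idea behind one such Fourier layer (a spectral convolution that mixes only a truncated set of low-frequency modes, plus a pointwise skip path); this is illustrative only and not the paper's actual architecture — the layer structure, mode count, and weights are assumptions:

```python
import numpy as np

def spectral_conv1d(x, weights, n_modes):
    """Spectral convolution: filter x by scaling its lowest Fourier modes.

    x       : (n_points,) real-valued signal (e.g., one RF channel)
    weights : (n_modes,) complex multipliers, learned in a real FNO
    n_modes : number of low-frequency modes kept (mode truncation)
    """
    x_ft = np.fft.rfft(x)                         # to Fourier space
    out_ft = np.zeros_like(x_ft)
    out_ft[:n_modes] = x_ft[:n_modes] * weights   # mix only kept modes
    return np.fft.irfft(out_ft, n=x.shape[0])     # back to physical space

def fno_layer(x, weights, n_modes, w_skip=1.0):
    # Fourier layer = spectral convolution + pointwise skip path + ReLU
    return np.maximum(spectral_conv1d(x, weights, n_modes) + w_skip * x, 0.0)

# Toy usage: identity weights on the kept modes act as a low-pass filter.
rng = np.random.default_rng(0)
signal = rng.standard_normal(64)
out = fno_layer(signal, np.ones(8, dtype=complex), n_modes=8)
```

Because the FFT makes the spectral convolution global in space at O(n log n) cost, stacks of such layers can capture the long-range reverberation patterns in RF data that local convolutions handle poorly.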
Problem

Research questions and friction points this paper is trying to address.

Lung Ultrasound Complexity
Medical Diagnosis Limitations
Gas Exchange Assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

LUNA model
Pulmonary ultrasound enhancement
Ventilation assessment accuracy
Jiayun Wang
Department of Computing and Mathematical Sciences, California Institute of Technology, 1200 E California Blvd, Pasadena, 91125, CA, United States
Oleksii Ostras
Postdoc, University of North Carolina at Chapel Hill
Biomedical engineering
Masashi Sode
PhD student at University of North Carolina at Chapel Hill
Deep learning, Ultrasound imaging, Biomedical Engineering
Bahareh Tolooshams
Assistant Professor, Amii, University of Alberta
Representation Learning, Inverse Problems, Generative Models, Interpretability, NeuroAI
Zong-Yi Li
Department of Computing and Mathematical Sciences, California Institute of Technology, 1200 E California Blvd, Pasadena, 91125, CA, United States
K. Azizzadenesheli
NVIDIA, 2788 San Tomas Express Way, Santa Clara, 95051, CA, United States
G. Pinton
Department of Biomedical Engineering, University of North Carolina at Chapel Hill, 103 South Building, Chapel Hill, 27514, NC, United States; Department of Biomedical Engineering, North Carolina State University, Campus Box 7625, Raleigh, 27695, NC, United States
A. Anandkumar
Department of Computing and Mathematical Sciences, California Institute of Technology, 1200 E California Blvd, Pasadena, 91125, CA, United States