🤖 AI Summary
Traditional inhalation injury grading systems (e.g., AIS) rely on subjective bronchoscopic assessment, exhibiting low reliability, validity, and weak prognostic correlation. To address this, we propose an end-to-end deep learning framework that enables objective, bronchoscopy image–driven grading, using duration of mechanical ventilation as the clinical gold standard. Methodologically, we introduce an enhanced StarGAN architecture integrating Patch Loss and SSIM Loss to generate high-fidelity, anatomically accurate, and clinically interpretable synthetic bronchoscopic images—marking the first such application in this domain and substantially mitigating small-sample generalization challenges. Coupled with a Swin Transformer–based classifier and Fréchet Inception Distance (FID)–based quantitative evaluation, our model achieves 77.78% classification accuracy (a 11.11% improvement) and sets a new state-of-the-art FID of 30.06. Blind expert review by burn surgeons confirms faithful preservation of bronchial anatomy and color characteristics in generated images.
📝 Abstract
Inhalation injuries face a challenge in clinical diagnosis and grading due to the limitations of traditional methods, such as Abbreviated Injury Score (AIS), which rely on subjective assessments and show weak correlations with clinical outcomes. This study introduces a novel deep learning-based framework for grading inhalation injuries using bronchoscopy images with the duration of mechanical ventilation as an objective metric. To address the scarcity of medical imaging data, we propose enhanced StarGAN, a generative model that integrates Patch Loss and SSIM Loss to improve synthetic images' quality and clinical relevance. The augmented dataset generated by enhanced StarGAN significantly improved classification performance when evaluated using the Swin Transformer, achieving an accuracy of 77.78%, an 11.11% improvement over the original dataset. Image quality was assessed using the Fr'echet Inception Distance (FID), where Enhanced StarGAN achieved the lowest FID of 30.06, outperforming baseline models. Burn surgeons confirmed the realism and clinical relevance of the generated images, particularly the preservation of bronchial structures and color distribution. These results highlight the potential of enhanced StarGAN in addressing data limitations and improving classification accuracy for inhalation injury grading.