🤖 AI Summary
This study addresses the challenges of automated screening for retinopathy of prematurity (ROP), notably data scarcity, class imbalance, and limited model generalizability, which hinder accurate simultaneous identification of structural staging and microvascular abnormalities. The authors propose a Context-Aware Asymmetric Ensemble (CAA Ensemble) that integrates a Multi-Scale Active Query Network (MS-AQNet), which localizes fibrovascular ridges, with VascuMIL, a vascular-topology-graph-based gated multiple instance learning framework that detects vascular tortuosity. Clinical context is incorporated as a dynamic query vector to guide feature extraction, and the model supports interpretable "glass-box" decision-making through counterfactual attention heatmaps and vascular threat maps. Evaluated on an imbalanced cohort of 188 infants (6,004 images), it attains a Macro F1 score of 0.93 for broad ROP staging and an AUC of 0.996 for Plus Disease detection, establishing a new state of the art.
📝 Abstract
Retinopathy of Prematurity (ROP) is among the leading causes of preventable childhood blindness. Automated screening remains challenging, primarily due to limited data availability and the complexity of the condition, which involves both structural staging and microvascular abnormalities. Current deep learning models depend heavily on large private datasets and passive multimodal fusion, and they commonly fail to generalize to small, imbalanced public cohorts. We therefore propose the Context-Aware Asymmetric Ensemble Model (CAA Ensemble), which simulates clinical reasoning through two specialized streams. First, the Multi-Scale Active Query Network (MS-AQNet) serves as a structure specialist, using clinical context as dynamic query vectors to spatially guide visual feature extraction and localize the fibrovascular ridge. Second, VascuMIL encodes Vascular Topology Maps (VMAP) within a gated Multiple Instance Learning (MIL) network to precisely identify vascular tortuosity. A synergistic meta-learner ensembles these orthogonal signals to resolve diagnostic discordance across multiple objectives. Tested on a highly imbalanced cohort of 188 infants (6,004 images), the framework attained state-of-the-art performance on two distinct clinical tasks: a Macro F1-score of 0.93 for Broad ROP staging and an AUC of 0.996 for Plus Disease detection. Crucially, the system offers 'glass-box' transparency through counterfactual attention heatmaps and vascular threat maps, demonstrating that clinical metadata dictates the model's visual search. More broadly, this study shows that architectural inductive bias can serve as an effective bridge across the medical AI data gap.
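The abstract does not detail VascuMIL's internals, but "gated" MIL networks typically pool per-instance embeddings (here, features of vessel segments from a topology map) with gated attention in the style of Ilse et al. (2018). Below is a minimal NumPy sketch of that pooling step; all dimensions, weight matrices, and the function name `gated_attention_pool` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention_pool(H, V, U, w):
    """Gated-attention MIL pooling (hypothetical sketch of a VascuMIL-style stream).

    H: (K, D) instance embeddings for one bag, e.g., K vessel-segment features.
    V, U: (L, D) projections for the tanh and sigmoid branches; w: (L,) scorer.
    Returns the attention-weighted bag embedding (D,) and weights (K,).
    """
    gate = np.tanh(H @ V.T) * (1.0 / (1.0 + np.exp(-(H @ U.T))))  # (K, L) gated features
    a = softmax(gate @ w)                                          # (K,) instance weights
    z = a @ H                                                      # (D,) bag embedding
    return z, a

# Toy bag: 8 vessel-segment embeddings of dimension 16, attention dim 32.
rng = np.random.default_rng(0)
K, D, L = 8, 16, 32
H = rng.normal(size=(K, D))
V, U, w = rng.normal(size=(L, D)), rng.normal(size=(L, D)), rng.normal(size=L)
z, a = gated_attention_pool(H, V, U, w)
```

The attention weights `a` are exactly what a per-segment "vascular threat map" could visualize: segments with high weight are the ones driving the bag-level tortuosity score.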