🤖 AI Summary
To address the poor generalizability and robustness of lesion segmentation across heterogeneous brain pathologies in multimodal MRI, this paper proposes a unified adaptive framework. It integrates multi-stream CNNs with a Swin Transformer, incorporates a lesion-aware hierarchical gating mechanism and dynamic cross-modal attention fusion (CMAF), and stabilizes training via pathology-specific data augmentation and difficulty-aware sampling to suppress optimization variance. To our knowledge, this is the first single-model approach to achieve state-of-the-art performance across three distinct brain lesion segmentation tasks: white matter hyperintensities (WMH) with a Dice similarity coefficient (DSC) of 0.831; ischemic stroke (ISLES 2022) with a 95th-percentile Hausdorff distance (HD95) of 9.69; and glioma (BraTS 2020) tumor core segmentation with a DSC of 0.8651. The framework substantially improves cross-pathology generalization and clinical reliability.
📝 Abstract
Automated segmentation of heterogeneous brain lesions from multi-modal MRI remains a critical challenge in clinical neuroimaging. Current deep learning models are typically specialized "point solutions" that generalize poorly and exhibit high performance variance, limiting their clinical reliability. To address these gaps, we propose the Unified Multi-Stream SYNAPSE-Net, an adaptive framework designed for both generalization and robustness. The framework is built on a novel hybrid architecture integrating multi-stream CNN encoders, a Swin Transformer bottleneck for global context, a dynamic cross-modal attention fusion (CMAF) mechanism, and a hierarchical gated decoder for high-fidelity mask reconstruction. The architecture is trained with a variance-reduction strategy that combines pathology-specific data augmentation with difficulty-aware sampling. The model was evaluated on three challenging public datasets: the MICCAI 2017 WMH Challenge, the ISLES 2022 Challenge, and the BraTS 2020 Challenge. Our framework attained a state-of-the-art DSC of 0.831 with an HD95 of 3.03 on the WMH dataset. On ISLES 2022, it achieved the best boundary accuracy, with a statistically significant improvement (HD95 of 9.69). On BraTS 2020, it reached the highest DSC for the tumor core region (0.8651). These findings suggest that our unified adaptive framework achieves state-of-the-art performance across multiple brain pathologies, providing a robust and clinically feasible solution for automated segmentation. The source code and pre-trained models are available at https://github.com/mubid-01/SYNAPSE-Net-pre.
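The abstract does not spell out CMAF's formulation. As an illustration only, a dynamic cross-modal attention fusion of this kind can be sketched as voxel-wise attention over modality streams: each MRI modality contributes a feature vector per voxel, and learned query/key projections decide how much each modality contributes to the fused representation. The function name, the shared-query design, and the projection matrices below are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def cross_modal_attention_fuse(features, w_q, w_k):
    """Illustrative cross-modal attention fusion (not the paper's exact CMAF).

    features: (M, N, C) array — M modality streams, N voxels, C channels.
    w_q, w_k: (C, C) learned projection matrices (random here for the sketch).
    Returns (N, C): per-voxel fused features, a convex combination of the
    modality features weighted by scaled dot-product attention.
    """
    M, N, C = features.shape
    # Modality-agnostic query from the average of all streams (an assumption).
    query = features.mean(axis=0) @ w_q            # (N, C)
    keys = features @ w_k                          # (M, N, C)
    scores = (keys * query[None]).sum(-1) / np.sqrt(C)  # (M, N)
    scores -= scores.max(axis=0, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=0, keepdims=True)        # softmax over modalities
    return (attn[..., None] * features).sum(axis=0)

# Usage: fuse four modality streams (e.g. T1, T2, FLAIR, DWI) for 10 voxels.
rng = np.random.default_rng(0)
M, N, C = 4, 10, 8
feats = rng.standard_normal((M, N, C))
w_q = rng.standard_normal((C, C))
w_k = rng.standard_normal((C, C))
fused = cross_modal_attention_fuse(feats, w_q, w_k)
```

Because the attention weights form a softmax over modalities, the fused feature is always a convex combination of the streams; if every modality carried identical features, the fusion would return them unchanged.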