🤖 AI Summary
To address three key bottlenecks in self-supervised learning (SSL) for 3D medical image segmentation (limited pretraining data scale, architectural mismatch with 3D convolutional networks, and insufficient evaluation practices), this paper introduces the first end-to-end SSL pretraining framework specifically designed for large-scale 3D brain MRI. Leveraging 39,000 multi-center brain MRI scans, the authors adapt the Masked Autoencoder (MAE) to 3D CNNs via a Residual Encoder U-Net architecture and deeply integrate nnU-Net's preprocessing, augmentation, and inference pipelines. Systematic cross-center evaluation across five development and eight test datasets demonstrates substantially improved generalization. The method achieves an average gain of approximately 3 Dice points over state-of-the-art SSL approaches and the strong supervised nnU-Net baseline, setting a new state of the art in 3D medical image segmentation. The code and pretrained models are publicly available.
📝 Abstract
Self-Supervised Learning (SSL) presents an exciting opportunity to unlock the potential of vast, untapped clinical datasets for various downstream applications that suffer from the scarcity of labeled data. While SSL has revolutionized fields like natural language processing and computer vision, its adoption in 3D medical image computing has been limited by three key pitfalls: small pre-training dataset sizes, architectures inadequate for 3D medical image analysis, and insufficient evaluation practices. In this paper, we address these issues by i) leveraging a large-scale dataset of 39k 3D brain MRI volumes, ii) using a Residual Encoder U-Net architecture within the state-of-the-art nnU-Net framework, and iii) building a robust development framework, incorporating 5 development and 8 testing brain MRI segmentation datasets, that allowed performance-driven design decisions to optimize the simple concept of Masked Autoencoders (MAEs) for 3D CNNs. The resulting model not only surpasses previous SSL methods but also outperforms the strong nnU-Net baseline by an average of approximately 3 Dice points, setting a new state of the art. Our code and models are made available here.
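The core pretraining idea, masked autoencoding on 3D volumes, can be sketched minimally. The snippet below is an illustrative assumption, not the paper's implementation: the patch size, 75% mask ratio, and zero-fill corruption are placeholder choices, and the Residual Encoder U-Net itself is omitted (the "reconstruction" here is just the corrupted input, to show how the loss is restricted to masked voxels).

```python
import numpy as np

def random_patch_mask(shape, patch=8, mask_ratio=0.75, rng=None):
    """Boolean mask over a 3D volume: True marks hidden (masked) voxels.

    Illustrative MAE-style patch masking; patch size and mask ratio are
    placeholder values, not the paper's settings.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    grid = tuple(s // patch for s in shape)   # patch grid, e.g. (8, 8, 8)
    n = int(np.prod(grid))
    flat = np.zeros(n, dtype=bool)
    flat[rng.choice(n, size=int(n * mask_ratio), replace=False)] = True
    # Broadcast the patch-level mask back to voxel resolution.
    m = flat.reshape(grid)
    return m.repeat(patch, 0).repeat(patch, 1).repeat(patch, 2)

def mae_loss(volume, reconstruction, mask):
    """MSE computed only on masked voxels, the standard MAE objective."""
    return ((volume - reconstruction) ** 2)[mask].mean()

vol = np.random.default_rng(1).standard_normal((64, 64, 64)).astype(np.float32)
mask = random_patch_mask(vol.shape)           # 75% of patches hidden
corrupted = np.where(mask, 0.0, vol)          # model only sees visible voxels
loss = mae_loss(vol, corrupted, mask)         # the network would minimize this
```

In an actual training loop, `corrupted` would be fed through the encoder-decoder and `mae_loss` computed against its output; restricting the loss to masked voxels is what forces the encoder to learn transferable anatomical features rather than copying its input.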