Medical Semantic Segmentation with Diffusion Pretrain

📅 2025-01-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address weak feature generalizability and low spatial localization accuracy in 3D medical image semantic segmentation, this paper proposes an anatomy-guided diffusion pretraining paradigm. Methodologically, it introduces (1) a 3D universal anatomical coordinate prediction network that explicitly encodes anatomical priors as a coordinate regression task, and (2) a multi-task joint pretraining framework that simultaneously optimizes diffusion-based reconstruction and anatomical coordinate prediction to enhance feature robustness and spatial awareness. Evaluated on a 13-organ segmentation benchmark, the method achieves a Dice score of 67.8%, outperforming state-of-the-art reconstruction-based pretraining approaches by 7.5 percentage points and matching the performance of advanced contrastive learning methods. This work represents the first successful integration of anatomical structure guidance into 3D medical diffusion pretraining.

Technology Category

Application Category

📝 Abstract
Recent advances in deep learning have shown that learning robust feature representations is critical for the success of many computer vision tasks, including medical image segmentation. In particular, both transformer and convolutional-based architectures have benefit from leveraging pretext tasks for pretraining. However, the adoption of pretext tasks in 3D medical imaging has been less explored and remains a challenge, especially in the context of learning generalizable feature representations. We propose a novel pretraining strategy using diffusion models with anatomical guidance, tailored to the intricacies of 3D medical image data. We introduce an auxiliary diffusion process to pretrain a model that produce generalizable feature representations, useful for a variety of downstream segmentation tasks. We employ an additional model that predicts 3D universal body-part coordinates, providing guidance during the diffusion process and improving spatial awareness in generated representations. This approach not only aids in resolving localization inaccuracies but also enriches the model's ability to understand complex anatomical structures. Empirical validation on a 13-class organ segmentation task demonstrate the effectiveness of our pretraining technique. It surpasses existing restorative pretraining methods in 3D medical image segmentation by $7.5%$, and is competitive with the state-of-the-art contrastive pretraining approach, achieving an average Dice coefficient of 67.8 in a non-linear evaluation scenario.
Problem

Research questions and friction points this paper is trying to address.

3D medical image
semantic segmentation
accuracy improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-trained Diffusion Model
3D Medical Image Segmentation
Spatial Position Prediction
🔎 Similar Papers
No similar papers found.
D
David Li
IRA Labs Ltd
Anvar Kurmukov
Anvar Kurmukov
AUMI.AI
Medical ImagingMachine LearningNeuroimaging
M
M. Goncharov
IRA Labs Ltd
R
Roman Sokolov
IRA Labs Ltd
M
Mikhail Belyaev
IRA Labs Ltd