Semantic-aware Random Convolution and Source Matching for Domain Generalization in Medical Image Segmentation

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses single-source domain generalization (SSDG) for medical image segmentation—i.e., training a model on a single source domain (e.g., CT) and achieving robust cross-modal (e.g., to MR), multi-center, and multi-cardiac-phase segmentation without access to target-domain data or fine-tuning. We propose Semantic-Aware Random Convolution (SARConv), which applies anatomy-guided, label-aware augmentation to source images, and a source-domain matching intensity mapping strategy that adaptively calibrates target-domain intensity distributions during inference. Integrated into mainstream segmentation architectures, these components jointly mitigate semantic and distributional shifts between domains. Evaluated on multiple cross-modal benchmarks, our method achieves state-of-the-art performance, with segmentation accuracy in certain scenarios approaching that of in-domain supervised baselines—establishing a new benchmark for SSDG in medical image segmentation.
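The label-aware augmentation described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `semantic_random_conv`, the per-region kernel sampling, and the intensity rescaling are all assumptions, sketching only the core idea that each annotated region is filtered with its own independently sampled random kernel (in contrast to plain random convolution, which uses one kernel for the whole image).

```python
import numpy as np
from scipy.ndimage import convolve

def semantic_random_conv(image, labels, kernel_size=3, rng=None):
    """Augment each labeled region with its own random convolution kernel.

    A rough, hypothetical sketch of the label-aware idea: every anatomical
    label in `labels` gets an independently sampled random kernel, so
    different regions of the source image are perturbed differently.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty_like(image, dtype=np.float64)
    for lbl in np.unique(labels):
        # Sample a random kernel and filter the full image with it.
        k = rng.normal(size=(kernel_size, kernel_size))
        filtered = convolve(image.astype(np.float64), k, mode="reflect")
        # Rescale the filtered response back to the original intensity range
        # so differently-augmented regions remain comparable.
        f_min, f_max = filtered.min(), filtered.max()
        if f_max > f_min:
            filtered = (filtered - f_min) / (f_max - f_min)
            filtered = filtered * (image.max() - image.min()) + image.min()
        # Keep the augmented response only inside this label's region.
        out[labels == lbl] = filtered[labels == lbl]
    return out
```

Applied to a 2D slice with its segmentation mask, this yields one training image whose organs and background have been perturbed by distinct random filters.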

📝 Abstract
We tackle the challenging problem of single-source domain generalization (DG) for medical image segmentation. To this end, we aim to train a network on one domain (e.g., CT) and apply it directly to a different domain (e.g., MR) without adapting the model and without requiring images or annotations from the new domain during training. We propose a novel method for promoting DG when training deep segmentation networks, which we call SRCSM. During training, our method diversifies the source domain through semantic-aware random convolution, where different regions of a source image are augmented differently based on their annotation labels. At test time, we complement the randomization of the training domain by mapping the intensity of target-domain images to make them resemble source-domain data. We perform a comprehensive evaluation on a variety of cross-modality and cross-center generalization settings for abdominal, whole-heart, and prostate segmentation, where we outperform previous DG techniques in the vast majority of experiments. Additionally, we investigate our method when training on whole-heart CT or MR data and testing on the diastolic and systolic phases of cine MR data captured with different scanner hardware, taking a step towards closing the domain gap in this even more challenging setting. Overall, our evaluation shows that SRCSM can be considered a new state of the art in DG for medical image segmentation and, moreover, even matches the performance of the in-domain baseline in several settings.
Problem

Research questions and friction points this paper is trying to address.

Single-source domain generalization for medical image segmentation
Training on one domain and applying to another without adaptation
Closing domain gaps across modalities and scanner hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic-aware random convolution diversifies source domain training
Intensity mapping aligns target domain images with source data
SRCSM outperforms prior domain generalization techniques in segmentation
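The test-time intensity mapping in the second bullet can be sketched as plain histogram matching, a hedged stand-in for the paper's source-matching strategy (the function name `match_to_source` and the inverse-CDF lookup are assumptions; `source_values` would be intensities pooled from source training images):

```python
import numpy as np

def match_to_source(target, source_values):
    """Map target-image intensities onto the source-domain distribution.

    A hypothetical sketch: classic histogram matching via quantile lookup,
    so the target image's intensity distribution follows the source's.
    """
    t_flat = target.ravel()
    # Rank each target voxel among all target voxels.
    order = np.argsort(t_flat)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(t_flat.size)
    quantiles = ranks / max(t_flat.size - 1, 1)
    # Look up the source intensity at the same quantile.
    src_sorted = np.sort(np.asarray(source_values).ravel())
    mapped = np.interp(quantiles, np.linspace(0.0, 1.0, src_sorted.size),
                       src_sorted)
    return mapped.reshape(target.shape)
```

At inference, each target-domain image would be remapped this way before being fed to the source-trained segmentation network, reducing the intensity shift between, e.g., MR test images and CT training data.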