Style Content Decomposition-based Data Augmentation for Domain Generalizable Medical Image Segmentation

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical image segmentation models often suffer significant performance degradation under domain shifts, particularly due to coupled style (appearance) and content (anatomical structure) discrepancies between training and test domains—where content shift has been historically overlooked. To address this, we propose a parameter-free, plug-and-play style-content disentangled data augmentation method. Our approach is the first to explicitly model and quantify anatomical structural variability in medical images, jointly augmenting both style and content within a rank-one latent space. By leveraging low-rank representation, latent-space disentanglement, and cross-domain style/content recombination, it enables efficient and realistic synthetic image generation. Crucially, it requires no architectural modifications or additional learnable parameters. Extensive experiments demonstrate substantial improvements in segmentation robustness across challenging domain-shift scenarios—including cross-sequence, cross-center, and cross-modality settings—consistently outperforming state-of-the-art methods.
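The rank-one recombination idea described above can be sketched with a plain SVD: treating an image's singular values as a "style code" and its singular vectors as the "content map", a synthetic sample is formed by pairing the content of one image with the style of another. This is a hedged illustration of the general technique, not the paper's exact algorithm; the function names and the choice of SVD as the rank-one decomposition are assumptions for the sketch.

```python
import numpy as np

def style_content_decompose(img):
    # Rank-one decomposition via SVD (illustrative): the singular values `s`
    # play the role of a style code, while the singular vectors (U, Vt)
    # carry the content, i.e. the spatial/structural layout.
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U, s, Vt

def recombine(content_img, style_img):
    # Cross-domain recombination: keep the content (singular vectors) of one
    # image and swap in the style code (singular values) of another.
    U, _, Vt = style_content_decompose(content_img)
    _, s, _ = style_content_decompose(style_img)
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
a = rng.random((64, 64))  # stand-in for a source-domain slice
b = rng.random((64, 64))  # stand-in for an image with a different appearance
augmented = recombine(a, b)  # content of `a`, style of `b`
```

Note that recombining an image with its own style code reconstructs it exactly, which is a quick sanity check that the decomposition is lossless; the actual method additionally perturbs the content map, which this sketch omits.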

📝 Abstract
Due to the domain shifts between training and testing medical images, learned segmentation models often experience significant performance degradation during deployment. In this paper, we first decompose an image into its style code and content map and reveal that domain shifts in medical images involve **style shifts** (*i.e.*, differences in image appearance) and **content shifts** (*i.e.*, variations in anatomical structures), the latter of which has been largely overlooked. To this end, we propose **StyCona**, a **sty**le **con**tent decomposition-based data **a**ugmentation method that innovatively augments both image style and content within the rank-one space, for domain generalizable medical image segmentation. StyCona is a simple yet effective plug-and-play module that substantially improves model generalization without requiring additional training parameters or modifications to the segmentation model architecture. Experiments on cross-sequence, cross-center, and cross-modality medical image segmentation settings with increasingly severe domain shifts demonstrate the effectiveness of StyCona and its superiority over state-of-the-art methods. The code is available at https://github.com/Senyh/StyCona.
Problem

Research questions and friction points this paper is trying to address.

Addresses domain shifts in medical image segmentation
Decomposes images into style and content for augmentation
Improves model generalization without architectural changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes images into style and content
Augments both style and content in rank-one space
Improves model generalization without extra parameters
Zhiqiang Shen
School of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
Peng Cao
School of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
Jinzhu Yang
School of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
Osmar R. Zaiane
Alberta Machine Intelligence Institute, University of Alberta, Edmonton, Alberta, Canada
Zhaolin Chen
Associate Professor in Medical Imaging, Monash University
Magnetic Resonance Imaging; Positron Emission Tomography; PET/MR; Ultra-low-field MRI