🤖 AI Summary
Medical image segmentation models often suffer significant performance degradation under domain shifts, particularly due to coupled style (appearance) and content (anatomical structure) discrepancies between training and test domains—where content shift has been historically overlooked. To address this, we propose a parameter-free, plug-and-play style-content disentangled data augmentation method. Our approach is the first to explicitly model and quantify anatomical structural variability in medical images, jointly augmenting both style and content within a rank-one latent space. By leveraging low-rank representation, latent-space disentanglement, and cross-domain style/content recombination, it enables efficient and realistic synthetic image generation. Crucially, it requires no architectural modifications or additional learnable parameters. Extensive experiments demonstrate substantial improvements in segmentation robustness across challenging domain-shift scenarios—including cross-sequence, cross-center, and cross-modality settings—consistently outperforming state-of-the-art methods.
📝 Abstract
Due to the domain shifts between training and testing medical images, learned segmentation models often experience significant performance degradation during deployment. In this paper, we first decompose an image into its style code and content map and reveal that domain shifts in medical images involve both **style shifts** (*i.e.*, differences in image appearance) and **content shifts** (*i.e.*, variations in anatomical structures), the latter of which has been largely overlooked. To this end, we propose **StyCona**, a **sty**le-**con**tent decomposition-based data **a**ugmentation method that innovatively augments both image style and content within the rank-one space, for domain-generalizable medical image segmentation. StyCona is a simple yet effective plug-and-play module that substantially improves model generalization without requiring additional training parameters or modifications to the segmentation model architecture. Experiments on cross-sequence, cross-center, and cross-modality medical image segmentation settings with increasingly severe domain shifts demonstrate the effectiveness of StyCona and its superiority over state-of-the-art methods. The code is available at https://github.com/Senyh/StyCona.
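To make the rank-one style/content idea concrete, here is a minimal, hypothetical sketch of how such an augmentation could be realized with an SVD: a 2D image is written as a sum of rank-one components, the singular values are treated as a "style code," and the singular vectors as the "content map." Mixing a source image's singular values and leading rank-one bases with those of a reference image then augments style and content jointly. This is an illustrative assumption about the mechanism, not the authors' exact implementation; the function name `stycona_augment` and the mixing parameter `rho` are invented for this sketch.

```python
import numpy as np

def stycona_augment(src: np.ndarray, ref: np.ndarray, rho: float = 0.5) -> np.ndarray:
    """Hypothetical rank-one style/content augmentation sketch.

    src, ref: 2D grayscale images of the same shape.
    rho: mixing strength in [0, 1]; rho=0 returns src unchanged.
    """
    # Rank-one decomposition of both images via SVD.
    U, s, Vt = np.linalg.svd(src, full_matrices=False)
    Ur, sr, _ = np.linalg.svd(ref, full_matrices=False)

    # "Style" augmentation: interpolate singular values toward the reference's.
    s_aug = (1 - rho) * s + rho * sr

    # "Content" augmentation: nudge the leading rank-one basis (dominant
    # structural component) toward the reference's basis.
    k = 1  # number of leading rank-one components to mix
    U_aug = U.copy()
    U_aug[:, :k] = (1 - rho) * U[:, :k] + rho * Ur[:, :k]

    # Recompose the augmented image from the mixed components.
    return U_aug @ np.diag(s_aug) @ Vt
```

Because the sketch adds no learnable parameters and operates purely on the input image, it would plug into any segmentation pipeline as an on-the-fly transform, consistent with the plug-and-play property described above.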