Bridging the Inter-Domain Gap through Low-Level Features for Cross-Modal Medical Image Segmentation

📅 2025-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses inter-domain discrepancy in unsupervised domain adaptation (UDA) for cross-modal medical image segmentation. To this end, the authors propose LowBridge—a lightweight, model-agnostic, plug-and-play two-stage style transfer framework that exploits the cross-modal invariance of low-level geometric features (e.g., edges) shared between source and target domains. The method comprises three components: (i) decoupled source-domain training, (ii) edge-driven generative reconstruction on the target domain using a U-Net–GAN hybrid architecture, and (iii) edge-guided segmentation inference. Across multiple public benchmarks, LowBridge consistently outperforms 11 existing UDA methods, setting a new state of the art. Ablation studies confirm its compatibility and generalizability across diverse generative backbones (e.g., U-Net, GAN variants) and segmentation architectures (e.g., nnU-Net, TransUNet).

📝 Abstract
This paper addresses the task of cross-modal medical image segmentation by exploring unsupervised domain adaptation (UDA) approaches. We propose a model-agnostic UDA framework, LowBridge, which builds on a simple observation: cross-modal images share similar low-level features (e.g., edges) because they depict the same anatomical structures. Specifically, we first train a generative model to recover the source images from their edge features, and then separately train a segmentation model on the generated source images. At test time, edge features from the target images are fed to the pretrained generative model to produce source-style target-domain images, which are then segmented by the pretrained segmentation network. Despite its simplicity, extensive experiments on various publicly available datasets demonstrate that the proposed LowBridge achieves state-of-the-art performance, outperforming eleven existing UDA approaches under different settings. Notably, further ablation studies show that LowBridge is agnostic to the choice of generative and segmentation models, suggesting it can be seamlessly plugged into the most advanced models to achieve even stronger results in the future. The code is available at https://github.com/JoshuaLPF/LowBridge.
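The test-time pipeline described in the abstract (target image → edge map → pretrained generator → pretrained segmenter) can be sketched as below. This is a minimal illustration, not the authors' implementation: the Sobel edge extractor is a generic stand-in for whatever edge operator the paper uses, and `generator`/`segmenter` are hypothetical placeholders for the pretrained U-Net–GAN generator and segmentation network.

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Extract a normalized low-level edge map (the modality-invariant cue)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)  # scale to [0, 1]

# Hypothetical placeholders: in the paper these are networks pretrained
# on the source domain (edge -> source-style image, image -> mask).
generator = lambda edges: edges           # placeholder for the U-Net/GAN generator
segmenter = lambda image: image > 0.5     # placeholder for the segmentation model

def lowbridge_infer(target_img: np.ndarray) -> np.ndarray:
    """Test-time flow: target image -> edges -> source-style image -> mask."""
    edges = sobel_edges(target_img)       # modality-invariant features
    source_style = generator(edges)       # translate into source appearance
    return segmenter(source_style)        # segment with the source-trained model
```

The key design point the sketch captures is decoupling: neither placeholder model ever sees a target-domain image directly, only edge maps or images generated from them, which is what makes the framework model-agnostic.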
Problem

Research questions and friction points this paper is trying to address.

Unsupervised domain adaptation for cross-modal medical image segmentation
Leveraging shared low-level features to bridge inter-domain gaps
Model-agnostic framework for generating and segmenting source-style target images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses edge features for cross-modal adaptation
Generates source-style images via pretrained model
Model-agnostic framework with modular components
Pengfei Lyu
Ph.D student at Northeastern University
Machine Learning · Computer vision · Multi-modal image processing
Pak-Hei Yeung
Nanyang Technological University, Singapore
Xiaosheng Yu
Northeastern University, Shenyang, China
Jing Xia
Nanyang Technological University, Singapore
Jianning Chi
Northeastern University, Shenyang, China
Chengdong Wu
Northeastern University, Shenyang, China
Jagath C. Rajapakse
Nanyang Technological University, Singapore