Contrast-Invariant Self-supervised Segmentation for Quantitative Placental MRI

📅 2025-05-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses three key challenges in T2*-weighted multi-echo placental MRI segmentation: weak inter-echo boundary contrast, absence of full-time-point ground-truth annotations, and motion artifact interference. To this end, we propose the first contrast-invariant representation learning framework specifically designed for multi-echo T2* data. Our method integrates masked autoencoder pretraining, pseudo-label-based unsupervised domain adaptation, global-local feature co-alignment, and a semantic matching loss that explicitly enforces intra-subject representation consistency across echoes. Evaluated on real clinical data, our approach significantly outperforms single-echo baselines and naive multi-echo fusion strategies, demonstrating strong cross-echo generalization. It establishes a robust, reproducible segmentation foundation for quantitative placental T2* mapping, enabling more reliable downstream biomarker analysis.
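The summary names pseudo-label-based unsupervised domain adaptation across echo times, but does not give its exact formulation. The sketch below illustrates the general confidence-thresholded pseudo-labeling idea in plain NumPy; all shapes, function names, and the confidence threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mpl_loss(student_logits, teacher_logits, conf_thresh=0.9):
    """Masked pseudo-labeling sketch for cross-echo adaptation.

    student_logits, teacher_logits: (C, H, W) per-pixel class scores.
    The teacher's confident predictions on an unlabeled echo become
    hard pseudo-labels; the student is penalized (cross-entropy) only
    on pixels where the teacher's confidence clears conf_thresh.
    """
    t_prob = softmax(teacher_logits, axis=0)
    pseudo = t_prob.argmax(axis=0)                  # (H, W) hard labels
    confident = t_prob.max(axis=0) >= conf_thresh   # ignore uncertain pixels
    s_logp = np.log(softmax(student_logits, axis=0))
    # Per-pixel cross-entropy against the pseudo-labels.
    ce = -np.take_along_axis(s_logp, pseudo[None], axis=0)[0]
    if not confident.any():
        return 0.0
    return float(ce[confident].mean())
```

When the teacher is uncertain everywhere (e.g. early in adaptation), no pixel contributes and the loss is zero, which is the usual guard against reinforcing noisy pseudo-labels.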

๐Ÿ“ Abstract
Accurate placental segmentation is essential for quantitative analysis of the placenta. However, this task is particularly challenging in T2*-weighted placental imaging due to: (1) weak and inconsistent boundary contrast across individual echoes; (2) the absence of manual ground truth annotations for all echo times; and (3) motion artifacts across echoes caused by fetal and maternal movement. In this work, we propose a contrast-augmented segmentation framework that leverages complementary information across multi-echo T2*-weighted MRI to learn robust, contrast-invariant representations. Our method integrates: (i) masked autoencoding (MAE) for self-supervised pretraining on unlabeled multi-echo slices; (ii) masked pseudo-labeling (MPL) for unsupervised domain adaptation across echo times; and (iii) global-local collaboration to align fine-grained features with global anatomical context. We further introduce a semantic matching loss to encourage representation consistency across echoes of the same subject. Experiments on a clinical multi-echo placental MRI dataset demonstrate that our approach generalizes effectively across echo times and outperforms both single-echo and naive fusion baselines. To our knowledge, this is the first work to systematically exploit multi-echo T2*-weighted MRI for placental segmentation.
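The abstract introduces a semantic matching loss that encourages representation consistency across echoes of the same subject. The exact loss is not specified here; a minimal cosine-similarity version, with hypothetical per-echo feature vectors as input, might look like:

```python
import numpy as np

def semantic_matching_loss(echo_embeddings):
    """Encourage contrast-invariant features: penalize dissimilarity
    between embeddings of different echoes of the same subject.

    echo_embeddings: (n_echoes, d) array, one feature vector per echo.
    Returns the mean (1 - cosine similarity) over all echo pairs.
    """
    z = np.asarray(echo_embeddings, dtype=float)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T                                     # pairwise cosine similarity
    iu = np.triu_indices(len(z), k=1)                 # unique echo pairs
    return float(np.mean(1.0 - sim[iu]))
```

Identical embeddings across echoes give a loss of zero, so minimizing this term pulls each subject's echo representations toward a common, contrast-invariant point in feature space.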
Problem

Research questions and friction points this paper is trying to address.

Weak boundary contrast in T2*-weighted placental MRI
Lack of manual annotations for all echo times
Motion artifacts from fetal and maternal movement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrast-augmented segmentation for multi-echo MRI
Masked autoencoding and pseudo-labeling techniques
Global-local collaboration for feature alignment
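The masked autoencoding bullet above rests on hiding random patches of each slice so the network must reconstruct them from context, learning anatomy rather than echo-specific contrast. A toy patch-masking routine, with patch size and mask ratio as illustrative defaults rather than the paper's settings:

```python
import numpy as np

def mask_patches(slice_2d, patch=8, mask_ratio=0.75, rng=None):
    """MAE-style pretraining input: zero out a random subset of patches.

    slice_2d: (H, W) image with H, W divisible by `patch`.
    Returns the masked image and a boolean array marking which of the
    gh*gw grid patches remain visible to the encoder.
    """
    rng = np.random.default_rng(rng)
    h, w = slice_2d.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    hidden = rng.choice(n, size=int(n * mask_ratio), replace=False)
    keep = np.ones(n, dtype=bool)
    keep[hidden] = False
    masked = slice_2d.copy()
    for idx in hidden:
        r, c = divmod(idx, gw)  # grid row/column of this patch
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return masked, keep
```

A decoder would then be trained to reconstruct the hidden patches from the visible ones; applying the same objective to every echo time is what lets the pretraining exploit unlabeled multi-echo slices.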
Xinliu Zhong
PhD student in Computer Science and Informatics, Emory University
Artificial Intelligence
Ruiying Liu
Department of Biomedical Informatics, Emory University
Emily S. Nichols
Department of Pediatrics, Western University
Xuzhe Zhang
PhD Student, Columbia University
computer vision, deep learning, medical image analysis, AI for Healthcare, MLLM
Andrew F. Laine
Columbia University
biomedical imaging, image analysis, deep learning, biomedical informatics, machine learning
Emma G. Duerden
Department of Pediatrics, Western University
Yun Wang
Department of Biomedical Informatics, Emory University