Towards Objective Obstetric Ultrasound Assessment: Contrastive Representation Learning for Fetal Movement Detection

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional fetal movement (FM) assessment suffers from subjectivity and low accuracy. To address this, we propose an automated FM recognition framework based on self-supervised contrastive representation learning. The method introduces a dual-contrastive loss that jointly models spatial and temporal features, augmented by a task-specific sampling strategy to enable robust representation learning from variable-length ultrasound videos; discriminative performance is further enhanced via probabilistic fine-tuning. Evaluated on a clinical dataset of 92 subjects, each contributing a 30-minute ultrasound video recording, the model achieves 78.01% sensitivity and 81.60% AUROC, outperforming existing approaches. This work is the first application of spatiotemporal dual-contrastive learning to fetal motion analysis, establishing a label-efficient paradigm for objective and interpretable FM assessment from unlabeled ultrasound videos, with clear translational potential for clinical practice.

📝 Abstract
Accurate fetal movement (FM) detection is essential for assessing prenatal health, as abnormal movement patterns can indicate underlying complications such as placental dysfunction or fetal distress. Traditional methods, including maternal perception and cardiotocography (CTG), suffer from subjectivity and limited accuracy. To address these challenges, we propose Contrastive Ultrasound Video Representation Learning (CURL), a novel self-supervised learning framework for FM detection from extended fetal ultrasound video recordings. Our approach leverages a dual-contrastive loss, incorporating both spatial and temporal contrastive learning, to learn robust motion representations. Additionally, we introduce a task-specific sampling strategy, ensuring the effective separation of movement and non-movement segments during self-supervised training, while enabling flexible inference on arbitrarily long ultrasound recordings through a probabilistic fine-tuning approach. Evaluated on an in-house dataset of 92 subjects, each with 30-minute ultrasound sessions, CURL achieves a sensitivity of 78.01% and an AUROC of 81.60%, demonstrating its potential for reliable and objective FM analysis. These results highlight the potential of self-supervised contrastive learning for fetal movement analysis, paving the way for improved prenatal monitoring and clinical decision-making.
Problem

Research questions and friction points this paper is trying to address.

Detecting fetal movement objectively from ultrasound videos
Addressing subjectivity in traditional obstetric assessment methods
Learning robust motion representations through contrastive learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised contrastive learning for fetal movement detection
Dual-contrastive loss combining spatial and temporal learning
Task-specific sampling strategy for movement segment separation
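The dual-contrastive idea above can be sketched as two InfoNCE terms, one over spatial embeddings and one over temporal embeddings, combined by a weighting factor. The following is a minimal illustrative sketch only: the function names, the NumPy formulation, and the weighting parameter `alpha` are assumptions for exposition, not CURL's actual loss or sampling implementation.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: each anchor's positive is the same-index
    row of `positives`; all other rows in the batch act as negatives."""
    # L2-normalise embeddings so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # cross-entropy on the diagonal

def dual_contrastive_loss(spatial_a, spatial_p, temporal_a, temporal_p, alpha=0.5):
    """Hypothetical dual-contrastive objective: weighted sum of a spatial
    and a temporal InfoNCE term (alpha is an assumed trade-off weight)."""
    return (alpha * info_nce(spatial_a, spatial_p)
            + (1 - alpha) * info_nce(temporal_a, temporal_p))
```

In this sketch, positives for the spatial term would come from augmented views of the same frame region, and positives for the temporal term from clips of the same movement segment, with the task-specific sampling strategy deciding which segments may be paired; those pairing rules are described in the paper, not here.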
Authors
Talha Ilyas, Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC 3168, Australia
Duong Nhu, AIM for Health Lab, Faculty of Information Technology, Monash University, Clayton, VIC 3168, Australia
Allison Thomas, Department of Obstetrics & Gynaecology, Monash University, Clayton, VIC 3168, Australia
Arie Levin, Department of Chemical & Biological Engineering, Monash University, Clayton, VIC 3168, Australia
Lim Wei Yap, School of Biomedical Engineering, University of Sydney, Darlington, NSW 2008, Australia
Shu Gong, Department of Obstetrics & Gynaecology, Monash University, Clayton, VIC 3168, Australia
David Vera Anaya, Department of Obstetrics & Gynaecology, Monash University, Clayton, VIC 3168, Australia
Yiwen Jiang, AIM for Health Lab, Faculty of Information Technology, Monash University, Clayton, VIC 3168, Australia
Deval Mehta, AIM for Health Lab, Monash University
Ritesh Warty, AIM for Health Lab, Faculty of Information Technology, Monash University, Clayton, VIC 3168, Australia
Vinayak Smith, Department of Obstetrics & Gynaecology, Monash University, Clayton, VIC 3168, Australia
Maya Reddy, Department of Obstetrics & Gynaecology, Monash University, Clayton, VIC 3168, Australia
Euan Wallace, Monash University
Wenlong Cheng, Department of Chemical & Biological Engineering, Monash University, Clayton, VIC 3168, Australia
Zongyuan Ge, AIM for Health Lab, Faculty of Information Technology, Monash University, Clayton, VIC 3168, Australia
Faezeh Marzbanrad, Monash University