Automated Cervical Os Segmentation for Camera-Guided, Speculum-Free Screening

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-time automatic segmentation of the cervical os in speculum-free cervical cancer screening remains challenging due to limited visual cues and anatomical variability in vaginal endoscopic video. Method: We propose a lightweight segmentation framework based on an endoscopy-video-pretrained vision transformer (EndoViT/DPT), fine-tuned on speculum-free vaginal endoscopy images. Five encoder-decoder architectures were systematically compared using ten-fold cross-validation and external validation on phantom data, with performance evaluated via IoU, Dice coefficient, detection rate, and localization error. Contribution/Results: EndoViT/DPT achieves the first robust, real-time cervical os segmentation in this setting—attaining a Dice score of 0.50±0.31, detection rate of 87%±33%, and inference speed of 21.5 FPS—significantly outperforming existing methods. The approach enables reliable, visualization-guided sampling by non-expert operators across low- and high-resource settings.

📝 Abstract
Cervical cancer is highly preventable, yet persistent barriers to screening limit progress toward elimination goals. Speculum-free devices that integrate imaging and sampling could improve access, particularly in low-resource settings, but require reliable visual guidance. This study evaluates deep learning methods for real-time segmentation of the cervical os in transvaginal endoscopic images. Five encoder-decoder architectures were compared using 913 frames from 200 cases in the IARC Cervical Image Dataset, annotated by gynaecologists. Performance was assessed using IoU, Dice, detection rate, and distance metrics with ten-fold cross-validation. EndoViT/DPT, a vision transformer pre-trained on surgical video, achieved the highest Dice score (0.50 ± 0.31) and detection rate (0.87 ± 0.33), outperforming CNN-based approaches. External validation with phantom data demonstrated robust segmentation under variable conditions at 21.5 FPS, supporting real-time feasibility. These results establish a foundation for integrating automated os recognition into speculum-free cervical screening devices to support non-expert use in both high- and low-resource contexts.
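The segmentation metrics named in the abstract (IoU, Dice, detection rate) can be sketched for binary masks as below. This is a minimal illustration, not the paper's evaluation code; the function names and the detection criterion (any positive overlap) are assumptions.

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Dice coefficient and IoU for binary masks (1 = cervical os pixel)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (union + eps)
    return dice, iou

def detected(pred: np.ndarray, gt: np.ndarray, iou_thresh: float = 0.0) -> bool:
    """Count a frame as a detection when prediction and ground truth overlap.

    The threshold of zero (any overlap) is an illustrative choice; the paper's
    exact detection criterion may differ.
    """
    _, iou = dice_iou(pred, gt)
    return iou > iou_thresh
```

The reported per-frame scores would then be averaged across folds to obtain summary statistics such as 0.50 ± 0.31 for Dice.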
Problem

Research questions and friction points this paper is trying to address.

Automated cervical os segmentation for speculum-free screening
Real-time visual guidance in transvaginal endoscopic images
Integration into non-expert cervical screening devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning for cervical os segmentation
Vision transformer outperforms CNN methods
Real-time segmentation supports non-expert screening
Aoife McDonald-Bowyer
The UCL Hawkes Institute, University College London, London, UK
Anjana Wijekoon
The UCL Hawkes Institute, University College London, London, UK
Ryan Laurance Love
Institute of Reproductive and Developmental Biology, Imperial College London, London, UK
Katie Allan
Queen Charlotte’s and Chelsea Hospital, Imperial College Healthcare NHS Trust, London, UK
Scott Colvin
Queen Charlotte’s and Chelsea Hospital, Imperial College Healthcare NHS Trust, London, UK
Aleksandra Gentry-Maharaj
Department of Women’s Cancer, EGA Institute for Women’s Health, University College London, London, UK
Adeola Olaitan
Department of Women’s Cancer, EGA Institute for Women’s Health, University College London, London, UK
Danail Stoyanov
Professor of Robot Vision, University College London
Surgical Vision · Surgical AI · Surgical Robotics · Computer Assisted Interventions · Surgical Data Science
Agostino Stilli
Associate Professor, University College London
Soft Robotics · Surgical Robotics · Rehabilitation Robotics · Healthcare Robotics · Human-Robot
Sophia Bano
Assistant Professor in Robotics and AI, University College London
Computer Vision · Surgical Data Science · Surgical Robotics · Computer-assisted Intervention · Medical Imaging