Machine Learning-Based Automated Assessment of Intracorporeal Suturing in Laparoscopic Fundoplication

📅 2024-12-16
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Current laparoscopic suturing skill assessment relies heavily on manual annotation and lacks real-time feedback, hindering objective, scalable surgical training. Method: We propose the first fully automated, annotation-free assessment framework for intracorporeal suturing in laparoscopic Nissen fundoplication. Our approach introduces (i) an unsupervised dual-instrument tracking method leveraging the Segment Anything Model (SAM), and (ii) an end-to-end 1D-CNN model trained without handcrafted kinematic features or ground-truth trajectory labels; a supervised PCA–Random Forest baseline is included for comparison. Contribution/Results: Evaluated on real porcine intestinal model surgical videos, the unsupervised 1D-CNN achieves 0.817 accuracy and 0.806 F1-score—significantly outperforming the supervised baseline (0.795 accuracy, 0.778 F1). To our knowledge, this is the first framework enabling high-accuracy, annotation-free, and deployable hierarchical surgical skill assessment.

📝 Abstract
Automated assessment of surgical skills using artificial intelligence (AI) provides trainees with instantaneous feedback. After bimanual tool motions are captured, derived kinematic metrics are reliable predictors of performance in laparoscopic tasks. Implementing automated tool tracking requires time-intensive human annotation. We developed AI-based tool tracking using the Segment Anything Model (SAM) to eliminate the need for human annotators. Here, we describe a study evaluating the usefulness of our tool tracking model in automated assessment during a laparoscopic suturing task in the fundoplication procedure. An automated tool tracking model was applied to recorded videos of Nissen fundoplication on porcine bowel. Surgeons were grouped as novices (PGY1-2) and experts (PGY3-5, attendings). The beginning and end of each suturing step were segmented, and motions of the left and right tools were extracted. A low-pass filter with a 24 Hz cut-off frequency removed noise. Performance was assessed using supervised and unsupervised models, and an ablation study compared results. Kinematic features (RMS velocity, RMS acceleration, RMS jerk, total path length, and bimanual dexterity) were extracted and analyzed using Logistic Regression, Random Forest, Support Vector Classifier, and XGBoost. PCA was performed for feature reduction. For unsupervised learning, a Denoising Autoencoder (DAE) was trained and combined with classifiers such as a 1-D CNN and traditional models. Data were extracted for 28 participants (9 novices, 19 experts). Supervised learning with PCA and Random Forest achieved an accuracy of 0.795 and an F1 score of 0.778. The unsupervised 1-D CNN achieved superior results with an accuracy of 0.817 and an F1 score of 0.806, eliminating the need for kinematic feature computation. We demonstrated an AI model capable of automated performance classification, independent of human annotation.
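The preprocessing and feature pipeline the abstract describes (low-pass filtering at 24 Hz, then RMS velocity, acceleration, jerk, path length, and a bimanual-dexterity measure) can be sketched roughly as follows. The 60 fps sampling rate, the Butterworth filter choice, and the bimanual-dexterity formula are assumptions for illustration; the abstract does not specify them.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 60.0  # assumed video frame rate (not stated in the abstract)

def lowpass(traj, cutoff=24.0, fs=FPS, order=4):
    """Zero-phase Butterworth low-pass filter (filter type is an assumption)."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, traj, axis=0)

def kinematic_features(traj, fs=FPS):
    """RMS velocity/acceleration/jerk and total path length for an (N, 2) tool trajectory."""
    vel = np.gradient(traj, 1.0 / fs, axis=0)
    acc = np.gradient(vel, 1.0 / fs, axis=0)
    jerk = np.gradient(acc, 1.0 / fs, axis=0)
    return {
        "rms_velocity": np.sqrt(np.mean(np.linalg.norm(vel, axis=1) ** 2)),
        "rms_acceleration": np.sqrt(np.mean(np.linalg.norm(acc, axis=1) ** 2)),
        "rms_jerk": np.sqrt(np.mean(np.linalg.norm(jerk, axis=1) ** 2)),
        "path_length": np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)),
    }

def bimanual_dexterity(left, right):
    """One plausible definition (an assumption): ratio of shorter to longer tool path."""
    pl = kinematic_features(left)["path_length"]
    pr = kinematic_features(right)["path_length"]
    return min(pl, pr) / max(pl, pr)
```

With a 60 fps assumption the 24 Hz cut-off sits safely below the 30 Hz Nyquist limit; at lower frame rates the filter design would have to change.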
Problem

Research questions and friction points this paper is trying to address.

Automated assessment of laparoscopic suturing skills using AI
Eliminating human annotation in surgical tool tracking
Classifying surgeon performance with supervised and unsupervised models
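The supervised branch above (kinematic features reduced with PCA, then classified with a Random Forest) can be sketched with scikit-learn as follows. The feature matrix, labels, component count, and hyperparameters are all placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Placeholder data: rows are trials, columns are kinematic features
# (RMS velocity/acceleration/jerk, path length, bimanual dexterity per hand).
rng = np.random.default_rng(0)
X = rng.normal(size=(28, 10))    # 28 participants, as in the study; 10 synthetic features
y = rng.integers(0, 2, size=28)  # 0 = novice, 1 = expert (synthetic labels)

# PCA feature reduction feeding a Random Forest, mirroring the reported baseline.
clf = make_pipeline(PCA(n_components=3), RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(X, y)
preds = clf.predict(X)
```

In practice the reported 0.795 accuracy would come from held-out evaluation (e.g. cross-validation), not from predicting on the training data as this toy sketch does.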
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-based tool tracking using Segment Anything Model
Automated performance classification without human annotation
Unsupervised 1-D CNN for superior accuracy
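The core idea of the 1-D CNN listed above (classifying raw tool-trajectory channels directly, without handcrafted kinematic features) can be illustrated as a single forward pass in NumPy. The layer sizes and random weights are purely illustrative, and the denoising-autoencoder stage upstream is omitted; this is a sketch of the architecture class, not the paper's model.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid-mode 1-D convolution: x is (C_in, T), w is (C_out, C_in, K)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            out[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return out

def forward(traj, params):
    """Conv -> ReLU -> global average pool -> linear score per class."""
    h = np.maximum(conv1d(traj, params["w1"], params["b1"]), 0.0)
    pooled = h.mean(axis=1)  # collapse the time axis
    return params["w2"] @ pooled + params["b2"]

rng = np.random.default_rng(0)
params = {
    "w1": rng.normal(scale=0.1, size=(8, 4, 5)),  # 4 input channels: (x, y) per tool
    "b1": np.zeros(8),
    "w2": rng.normal(scale=0.1, size=(2, 8)),     # 2 classes: novice / expert
    "b2": np.zeros(2),
}
traj = rng.normal(size=(4, 120))  # 120 time steps of synthetic dual-tool trajectory
scores = forward(traj, params)    # one unnormalized score per class
```

Because the convolution consumes the trajectory directly, no RMS or path-length features need to be computed, which is the property the summary highlights.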
Shekhar Madhav Khairnar
Department of Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, USA
Huu Phong Nguyen
Department of Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, USA
Alexis Desir
Department of Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, USA
Carla Holcomb
Department of Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, USA
Daniel J. Scott
Department of Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, USA
Ganesh Sankaranarayanan
Associate Professor, Department of Surgery, The University of Texas Southwestern Medical Center
Haptics · Telerobotics · Telesurgery · Surgical Simulation · Deep Learning