🤖 AI Summary
This study addresses the challenge of subjective variability in endoscopic ultrasound (EUS) image segmentation for pancreatic cancer, where automated and objective tumor boundary delineation remains lacking. The authors propose a novel approach that, for the first time, employs a Vision Transformer as the backbone network within the USFM segmentation framework to perform automatic segmentation of pancreatic tumors on grayscale EUS images standardized to 512×512 pixels. Evaluated via five-fold cross-validation and external testing, the model achieves Dice scores of 0.651 and 0.657, respectively, with specificity exceeding 97%, demonstrating strong generalizability and robustness across datasets. The analysis also highlights persistent challenges in multi-region missegmentation under data heterogeneity, offering critical insights for future methodological refinements.
📄 Abstract
Background: Pancreatic cancer is one of the most aggressive cancers, with poor survival rates. Endoscopic ultrasound (EUS) is a key diagnostic modality, but its effectiveness is constrained by operator subjectivity. This study evaluates a Vision Transformer-based deep learning segmentation model for pancreatic tumors.

Methods: A segmentation model using the USFM framework with a Vision Transformer backbone was trained and validated with 17,367 EUS images (from two public datasets) in 5-fold cross-validation. The model was tested on an independent dataset of 350 EUS images from another public dataset, manually segmented by radiologists. Preprocessing included grayscale conversion, cropping, and resizing to 512×512 pixels. Metrics included Dice similarity coefficient (DSC), intersection over union (IoU), sensitivity, specificity, and accuracy.

Results: In 5-fold cross-validation, the model achieved a mean DSC of 0.651 ± 0.738, IoU of 0.579 ± 0.658, sensitivity of 69.8%, specificity of 98.8%, and accuracy of 97.5%. On the external validation set, the model achieved a DSC of 0.657 (95% CI: 0.634-0.769), IoU of 0.614 (95% CI: 0.590-0.689), sensitivity of 71.8%, and specificity of 97.7%. Results were consistent across datasets, but 9.7% of cases exhibited erroneous multiple-region predictions.

Conclusions: The Vision Transformer-based model demonstrated strong performance for pancreatic tumor segmentation in EUS images. However, dataset heterogeneity and limited external validation highlight the need for further refinement, standardization, and prospective studies.
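The reported overlap metrics are standard for binary segmentation masks. A minimal NumPy sketch of how DSC and IoU are typically computed from a predicted mask and a ground-truth mask is shown below (the function names and the smoothing term `eps` are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient: 2*|P ∩ T| / (|P| + |T|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union (Jaccard index): |P ∩ T| / |P ∪ T|."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / (union + eps))

# Toy example: a 2x2 prediction overlapping a 2x2 ground truth in one pixel.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(dice_coefficient(pred, target))  # ≈ 0.667
print(iou_score(pred, target))         # ≈ 0.5
```

Sensitivity, specificity, and accuracy follow analogously from the per-pixel confusion matrix (TP, FP, TN, FN) of the same two masks.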