🤖 AI Summary
In endoscopic surveillance of rectal cancer, detection of local regrowth (LR) is hindered by illumination variations, blood artifacts, motion blur, and post-therapeutic anatomical alterations, leading to poor generalizability of existing models. Method: We propose the first Swin Transformer–based robust assessment framework tailored for longitudinal colorectal cancer surveillance. It explicitly addresses both distribution shift (cross-center out-of-distribution, OOD, data) and concept shift (therapy-induced morphological and chromatic changes), using optimal transport to simulate realistic color perturbations for rigorous robustness evaluation. Contribution/Results: Our model significantly outperforms ViT and CNN baselines. On private longitudinal follow-up data it achieves an LR detection AUC of 0.84; on cross-center OOD data, 0.83. Under artificial color perturbations it maintains stable performance, with AUCs of 0.83 (follow-up) and 0.87 (OOD), demonstrating superior robustness and clinical applicability.
📝 Abstract
Endoscopic images are used at every stage of rectal cancer care: screening, diagnosis, during treatment to assess response and treatment toxicity such as colitis, and at follow-up to detect new tumor or local regrowth (LR). However, subjective assessment is highly variable; it can underestimate the degree of response in some patients, subjecting them to unnecessary surgery, or overestimate response, placing patients at risk of disease spread. Advances in deep learning have shown the ability to produce consistent and objective response assessment from endoscopic images. However, methods for detecting cancers and regrowth and for monitoring response over the entire course of treatment and follow-up are lacking, because automated diagnosis and rectal cancer response assessment require methods that are robust both to the inherent illumination variations and confounding conditions (blood, scope, blurring) present in endoscopy images and to changes in the normal lumen and tumor during treatment. Hence, a hierarchical shifted-window (Swin) transformer was trained to distinguish rectal cancer from normal lumen in endoscopy images. The Swin model, two convolutional models (ResNet-50, WideResNet-50), and a vision transformer (ViT) were trained and evaluated on longitudinal follow-up images from a private dataset to detect LR, and on out-of-distribution (OOD) public colonoscopy datasets to detect pre-/non-cancerous polyps. Color shifts were applied using optimal transport to simulate distribution shifts. Swin and ResNet models were similarly accurate on the in-distribution dataset. Swin was more accurate than the other methods (AUC, follow-up: 0.84; OOD: 0.83), even under color shifts (follow-up: 0.83; OOD: 0.87), indicating its capability to provide robust performance for longitudinal cancer assessment.
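The abstract does not specify how the optimal-transport color shifts were computed. One common, simple instance of OT-based color transfer is per-channel quantile matching: for a convex 1-D cost, the monotone (sorted) map is the exact optimal transport plan, so sorting each color channel of the source image and replacing values with the corresponding quantiles of a reference image shifts the source toward the reference color distribution. The sketch below is illustrative only (function name and per-channel RGB treatment are assumptions, not the authors' implementation):

```python
import numpy as np

def ot_color_shift(src, ref):
    """Shift the color distribution of `src` toward that of `ref`
    via per-channel 1-D optimal transport (quantile matching).

    src, ref: float arrays of shape (H, W, 3).
    Note: this is a simplified stand-in for the paper's OT-based
    color perturbation, not the authors' exact method.
    """
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s = src[..., c].ravel()
        r = ref[..., c].ravel()
        order = np.argsort(s)          # rank the source pixels
        ref_sorted = np.sort(r)        # reference quantiles
        # Interpolate reference quantiles onto the source pixel count
        # (the sorted-to-sorted map is the exact 1-D OT plan).
        q_src = np.linspace(0.0, 1.0, s.size)
        q_ref = np.linspace(0.0, 1.0, r.size)
        mapped = np.interp(q_src, q_ref, ref_sorted)
        shifted = np.empty_like(s)
        shifted[order] = mapped        # i-th smallest src -> i-th ref quantile
        out[..., c] = shifted.reshape(src.shape[:2])
    return out
```

Applying such a shift to held-out test images, while leaving the trained model untouched, gives a controlled way to probe robustness to cross-center color variation, which is the role the abstract describes.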