🤖 AI Summary
Colonoscopy polyp miss-rates contribute significantly to colorectal cancer risk. To address the scarcity of high-quality, open-source data for AI-driven polyp detection and segmentation, this work introduces PolypDB, a publicly available, multicenter, multimodal colonoscopy polyp image dataset comprising 3,934 images with both pixel-level (segmentation) and bounding-box (detection) annotations. It spans three clinical centers in Norway, Sweden, and Vietnam, and covers five imaging modalities: BLI, FICE, LCI, NBI, and WLI. The authors establish a unified, cross-site, cross-device, cross-modality annotation protocol and propose an evaluation framework along three axes: imaging modality, clinical center, and federated learning settings. The dataset is openly released on OSF and GitHub, accompanied by standardized train/val/test splits, benchmark implementations (nnU-Net for segmentation, YOLOv8 for detection), and baseline performance metrics, bridging a critical gap in open, clinically diverse resources for polyp AI research.
📝 Abstract
Colonoscopy is the primary method for the examination, detection, and removal of polyps. However, challenges such as variations in endoscopists' skills, bowel preparation quality, and the complex structure of the large intestine contribute to a high polyp miss-rate. These missed polyps can later develop into cancer, underscoring the importance of improving detection methods. To address the lack of publicly available, large, diverse, multi-center datasets for developing automatic polyp detection and segmentation methods, we introduce PolypDB, a large-scale, publicly available dataset containing 3,934 still polyp images and their corresponding ground truth from real colonoscopy videos. PolypDB comprises images from five modalities: Blue Light Imaging (BLI), Flexible spectral Imaging Color Enhancement (FICE), Linked Color Imaging (LCI), Narrow Band Imaging (NBI), and White Light Imaging (WLI), collected at three medical centers in Norway, Sweden, and Vietnam. We provide a benchmark on each modality and center, including federated learning settings, using popular segmentation and detection baselines. PolypDB is public and can be downloaded at https://osf.io/pr7ms/. More information about the dataset, the segmentation, detection, and federated learning benchmarks, and the train-test split can be found at https://github.com/DebeshJha/PolypDB.
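Because the dataset spans multiple centers and modalities, a per-subgroup (stratified) split is one way to build train/val/test partitions so every center-modality combination appears in each split. The sketch below is a minimal illustration of that idea; the record fields (`image`, `center`, `modality`), the directory-free record list, and the 80/10/10 ratios are assumptions for illustration, not the official PolypDB split protocol (the released splits on GitHub should be used for comparable benchmarks).

```python
import random
from collections import defaultdict

def make_splits(records, ratios=(0.8, 0.1, 0.1), seed=42):
    """Deterministically split records into train/val/test,
    stratified by (center, modality) so every subgroup
    contributes to each split. Field names are hypothetical."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    groups = defaultdict(list)
    for rec in records:
        groups[(rec["center"], rec["modality"])].append(rec)
    splits = {"train": [], "val": [], "test": []}
    rng = random.Random(seed)  # fixed seed for reproducibility
    for key in sorted(groups):
        items = sorted(groups[key], key=lambda r: r["image"])
        rng.shuffle(items)
        n_train = int(len(items) * ratios[0])
        n_val = int(len(items) * ratios[1])
        splits["train"] += items[:n_train]
        splits["val"] += items[n_train:n_train + n_val]
        splits["test"] += items[n_train + n_val:]
    return splits

# Synthetic records mimicking the 3 centers x 5 modalities structure.
records = [{"image": f"img_{i:04d}.jpg", "center": c, "modality": m}
           for i, (c, m) in enumerate(
               (c, m)
               for c in ("Norway", "Sweden", "Vietnam")
               for m in ("BLI", "FICE", "LCI", "NBI", "WLI")
               for _ in range(20))]
splits = make_splits(records)
print(len(splits["train"]), len(splits["val"]), len(splits["test"]))  # 240 30 30
```

Stratifying by (center, modality) keeps rare subgroups (e.g., a modality recorded at only one center) represented in validation and test, which matters when reporting the per-modality and per-center benchmarks described above.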