IceBench: A Benchmark for Deep Learning based Sea Ice Type Classification

📅 2025-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
The sea ice type classification community lacks standardized benchmarks and systematic model comparisons. Method: We introduce IceBench, the first open-source benchmark dedicated to this task, built on the AI4Arctic dataset. It uniformly supports pixel-wise and patch-wise deep learning methods, evaluating 12 state-of-the-art semantic segmentation models, including U-Net and SegFormer, with multi-dimensional metrics (IoU, F1-score, overall accuracy). IceBench further enables the first empirical analysis of cross-temporal/spatial transfer, data downscaling, and preprocessing strategies to identify generalization bottlenecks. Implemented in PyTorch, it integrates remote sensing image preprocessing and domain adaptation modules to ensure reproducibility and extensibility. Contribution/Results: All code, evaluation protocols, and benchmark results are publicly released, establishing a standardized evaluation framework and a collaborative foundation for intelligent sea ice interpretation.
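The per-class metrics named above (IoU, F1-score, overall accuracy) can all be derived from one confusion matrix over the predicted segmentation. A minimal NumPy sketch of that computation; the function names and the integer class encoding are illustrative assumptions, not taken from the IceBench codebase:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """M[i, j] counts pixels whose true class is i and predicted class is j."""
    m = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(m, (y_true, y_pred), 1)  # scatter-add one count per pixel
    return m

def segmentation_metrics(y_true, y_pred, n_classes):
    """Per-class IoU and F1, plus overall (pixel) accuracy."""
    m = confusion_matrix(y_true, y_pred, n_classes)
    tp = np.diag(m).astype(float)
    fp = m.sum(axis=0) - tp  # predicted as class c but wrong
    fn = m.sum(axis=1) - tp  # true class c but missed
    iou = tp / np.maximum(tp + fp + fn, 1)       # intersection over union
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1)
    overall_acc = tp.sum() / m.sum()
    return iou, f1, overall_acc
```

Averaging the per-class IoU gives the familiar mean IoU; keeping the per-class values is useful here because rare ice types would otherwise be masked by dominant classes such as open water.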

📝 Abstract
Sea ice plays a critical role in the global climate system and maritime operations, making timely and accurate classification essential. However, traditional manual methods are time-consuming, costly, and have inherent biases. Automating sea ice type classification addresses these challenges by enabling faster, more consistent, and scalable analysis. While both traditional and deep learning approaches have been explored, deep learning models offer a promising direction for improving efficiency and consistency in sea ice classification. However, the absence of a standardized benchmark and comparative study prevents a clear consensus on the best-performing models. To bridge this gap, we introduce IceBench, a comprehensive benchmarking framework for sea ice type classification. Our key contributions are threefold: First, we establish the IceBench benchmarking framework, which leverages the existing AI4Arctic Sea Ice Challenge dataset as a standardized dataset, incorporates a comprehensive set of evaluation metrics, and includes representative models from the entire spectrum of sea ice type classification methods, categorized into two distinct groups: pixel-based classification methods and patch-based classification methods. IceBench is open-source and allows for convenient integration and evaluation of other sea ice type classification methods, facilitating comparative evaluation of new methods and improving reproducibility in the field. Second, we conduct an in-depth comparative study on representative models to assess their strengths and limitations, providing insights for both practitioners and researchers. Third, we leverage IceBench for systematic experiments addressing key research questions on model transferability across seasons (time) and locations (space), data downscaling, and preprocessing strategies.
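The two method groups in the abstract differ mainly in how training samples are formed: pixel-based models predict a class for every pixel of a scene, while patch-based models classify fixed-size windows cut from it. A minimal sketch of one common patch-extraction convention (labeling each patch by its center pixel); the function name, stride handling, and labeling rule are assumptions for illustration, not details from the paper:

```python
import numpy as np

def extract_patches(scene, labels, patch_size, stride):
    """Slide a window over a (H, W) scene; label each patch with the
    class of its center pixel (one common patch-labeling convention)."""
    patches, patch_labels = [], []
    h, w = scene.shape
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(scene[i:i + patch_size, j:j + patch_size])
            patch_labels.append(labels[i + patch_size // 2,
                                       j + patch_size // 2])
    return np.stack(patches), np.array(patch_labels)
```

With center-pixel labeling, a patch classifier can be evaluated against the same pixel-wise ground truth as a segmentation model, which is what makes the two groups comparable under one benchmark.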
Problem

Research questions and friction points this paper is trying to address.

Automating sea ice classification to replace manual methods
Lack of standardized benchmark for deep learning models
Evaluating model transferability across seasons and locations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized benchmark for sea ice classification
Comprehensive evaluation metrics and models
Systematic experiments on model transferability
Samira Alkaee Taleghan
PhD Student, University of Colorado Denver
deep learning, computer vision
Andrew P. Barrett
National Snow and Ice Data Center, CIRES, University of Colorado Boulder, Boulder, CO, USA
Walter N. Meier
National Snow and Ice Data Center, CIRES, University of Colorado Boulder, Boulder, CO, USA
Farnoush Banaei-Kashani
Associate Professor of Computer Science and Engineering, University of Colorado Denver
Big Data Management and Mining