🤖 AI Summary
This study addresses the challenges of scarce labeled data and limited transferability of general-purpose models in ocean remote sensing. The authors propose OceanMAE, the first self-supervised masked autoencoder that integrates physics-driven oceanic prior knowledge. During pretraining on Sentinel-2 multispectral imagery, OceanMAE incorporates physically meaningful auxiliary ocean features to achieve domain-aligned representation learning. Transferred to downstream tasks through a modified UNet-based framework, the method significantly improves marine pollutant segmentation on the MARIDA and MADOS datasets and achieves competitive, task-dependent results in bathymetric regression on MagicBathyNet. These findings validate the effectiveness and generalizability of incorporating oceanic prior knowledge into remote sensing representation learning.
📝 Abstract
Accurate ocean mapping is essential for applications such as bathymetry estimation, seabed characterization, marine litter detection, and ecosystem monitoring. However, ocean remote sensing (RS) remains constrained by limited labeled data and by the reduced transferability of models pre-trained mainly on land-dominated Earth observation imagery. In this paper, we propose OceanMAE, an ocean-specific masked autoencoder that extends standard MAE pre-training by integrating multispectral Sentinel-2 observations with physically meaningful ocean descriptors during self-supervised learning. By incorporating these auxiliary ocean features, OceanMAE is designed to learn more informative and ocean-aware latent representations from large-scale unlabeled data. To transfer these representations to downstream applications, we further employ a modified UNet-based framework for marine segmentation and bathymetry estimation. Pre-trained on the Hydro dataset, OceanMAE is evaluated on MADOS and MARIDA for marine pollutant and debris segmentation, and on MagicBathyNet for bathymetry regression. The experiments show that OceanMAE yields the strongest gains on marine segmentation, while bathymetry benefits are competitive and task-dependent. In addition, an ablation against a standard MAE on MARIDA indicates that incorporating auxiliary ocean descriptors during pre-training improves downstream segmentation quality. These findings highlight the value of physically informed and domain-aligned self-supervised pre-training for ocean RS. Code and weights are publicly available at https://git.tu-berlin.de/joanna.stamer/SSLORS2.
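The core data-side idea, appending physically meaningful ocean descriptors to the multispectral input before MAE-style patch masking, can be illustrated with a minimal NumPy sketch. The band ordering, the NDWI-like water index, the patch size, and the 75% mask ratio below are illustrative assumptions, not the paper's exact feature set or configuration:

```python
import numpy as np

def add_ocean_descriptors(bands):
    """Append a hypothetical physically meaningful channel (an NDWI-like
    water index) to a multispectral tile. Band order is an assumption:
    index 1 = green, index 3 = NIR."""
    green, nir = bands[1], bands[3]
    ndwi = (green - nir) / (green + nir + 1e-6)
    return np.concatenate([bands, ndwi[None]], axis=0)

def patchify(img, p):
    """Split a (C, H, W) tile into non-overlapping p x p patch tokens,
    as in standard MAE tokenization."""
    c, h, w = img.shape
    gh, gw = h // p, w // p
    x = img[:, : gh * p, : gw * p].reshape(c, gh, p, gw, p)
    return x.transpose(1, 3, 0, 2, 4).reshape(gh * gw, c * p * p)

def random_mask(tokens, ratio, rng):
    """Keep a random subset of tokens (the encoder's visible patches);
    the rest would be reconstructed by the decoder during pre-training."""
    n = tokens.shape[0]
    keep = int(n * (1 - ratio))
    visible_idx = np.sort(rng.permutation(n)[:keep])
    return tokens[visible_idx], visible_idx

rng = np.random.default_rng(0)
bands = rng.random((4, 32, 32)).astype(np.float32)  # toy 4-band tile
x = add_ocean_descriptors(bands)                    # (5, 32, 32)
tokens = patchify(x, 8)                             # (16, 320)
visible, idx = random_mask(tokens, ratio=0.75, rng=rng)
print(x.shape, tokens.shape, visible.shape)         # (5, 32, 32) (16, 320) (4, 320)
```

The encoder would see only the visible tokens, and the reconstruction loss over the masked tokens then forces the latent space to encode both spectral and ocean-physics information.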