OceanMAE: A Foundation Model for Ocean Remote Sensing

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenges of scarce labeled data and limited transferability of general-purpose models in ocean remote sensing. The authors propose OceanMAE, the first self-supervised masked autoencoder that integrates physics-driven oceanic prior knowledge. During pretraining on Sentinel-2 multispectral imagery, OceanMAE incorporates physically meaningful auxiliary ocean features to achieve domain-aligned representation learning. Built upon an enhanced UNet architecture, the method significantly improves performance in marine pollutant segmentation on the MARIDA and MADOS datasets and demonstrates strong results in bathymetric regression on MagicBathyNet. These findings validate the effectiveness and generalizability of incorporating oceanic prior knowledge into remote sensing representation learning.
📝 Abstract
Accurate ocean mapping is essential for applications such as bathymetry estimation, seabed characterization, marine litter detection, and ecosystem monitoring. However, ocean remote sensing (RS) remains constrained by limited labeled data and by the reduced transferability of models pre-trained mainly on land-dominated Earth observation imagery. In this paper, we propose OceanMAE, an ocean-specific masked autoencoder that extends standard MAE pre-training by integrating multispectral Sentinel-2 observations with physically meaningful ocean descriptors during self-supervised learning. By incorporating these auxiliary ocean features, OceanMAE is designed to learn more informative and ocean-aware latent representations from large-scale unlabeled data. To transfer these representations to downstream applications, we further employ a modified UNet-based framework for marine segmentation and bathymetry estimation. Pre-trained on the Hydro dataset, OceanMAE is evaluated on MADOS and MARIDA for marine pollutant and debris segmentation, and on MagicBathyNet for bathymetry regression. The experiments show that OceanMAE yields the strongest gains on marine segmentation, while bathymetry benefits are competitive and task-dependent. In addition, an ablation against a standard MAE on MARIDA indicates that incorporating auxiliary ocean descriptors during pre-training improves downstream segmentation quality. These findings highlight the value of physically informed and domain-aligned self-supervised pre-training for ocean RS. Code and weights are publicly available at https://git.tu-berlin.de/joanna.stamer/SSLORS2.
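The random patch masking at the core of standard MAE pre-training, which the abstract describes OceanMAE extending with auxiliary ocean descriptors, can be sketched as follows. This is a minimal illustration only: the array shapes, the 75% mask ratio, and the concatenation of hypothetical auxiliary descriptors along the feature axis are assumptions, not the paper's implementation.

```python
import numpy as np

def mask_patches(tokens, mask_ratio=0.75, rng=None):
    """Randomly drop a fraction of patch tokens, as in standard MAE pre-training.

    tokens: (num_patches, dim) array of patch embeddings.
    Returns the visible tokens, the indices kept, and a boolean mask
    (True = masked) over which the reconstruction loss would be computed.
    """
    rng = rng or np.random.default_rng()
    n = tokens.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    order = rng.permutation(n)          # random shuffle of patch indices
    keep_idx = np.sort(order[:n_keep])  # indices of visible patches
    mask = np.ones(n, dtype=bool)
    mask[keep_idx] = False              # False = visible to the encoder
    return tokens[keep_idx], keep_idx, mask

# Illustrative input: Sentinel-2 spectral patch embeddings concatenated with
# hypothetical physics-derived ocean descriptors (both shapes are assumed).
spectral = np.random.rand(196, 64)   # 14x14 patch grid, 64-dim spectral embedding
ocean_aux = np.random.rand(196, 8)   # auxiliary ocean features per patch
tokens = np.concatenate([spectral, ocean_aux], axis=1)

visible, keep_idx, mask = mask_patches(tokens, mask_ratio=0.75)
# With 196 patches and a 0.75 mask ratio, 49 tokens stay visible.
```

With this setup, only the visible tokens are encoded, and the decoder reconstructs the masked patches; the abstract's key idea is that the reconstructed targets carry ocean-aware signal rather than purely spectral content.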
Problem

Research questions and friction points this paper is trying to address.

ocean remote sensing
limited labeled data
transferability
domain shift
self-supervised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

OceanMAE
masked autoencoder
self-supervised learning
ocean remote sensing
physically-informed pre-training
Viola-Joanna Stamer
Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, Germany and Berlin Institute for the Foundations of Learning and Data (BIFOLD), 10623 Berlin, Germany
Panagiotis Agrafiotis
Postdoctoral Researcher - Marie Skłodowska-Curie Fellow, BIFOLD and Faculty of EECS, TU Berlin
3D Computer Vision · Photogrammetry · Remote Sensing · Image Analysis · Seabed Mapping
Behnood Rasti
Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, Germany and Berlin Institute for the Foundations of Learning and Data (BIFOLD), 10623 Berlin, Germany
Begüm Demir
Professor, BIFOLD and Faculty of EECS, Technische Universität Berlin
Remote Sensing · Machine Learning · Image Analysis · Signal Processing