Context-Aware Multimodal Representation Learning for Spatio-Temporally Explicit Environmental Modelling

πŸ“… 2025-11-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing Earth observation foundation models are constrained to fixed spatio-temporal scales and struggle to achieve high spatial detail and high temporal fidelity simultaneously. To address this, we propose a two-stage multimodal representation learning framework that jointly models Sentinel-1 (SAR) and Sentinel-2 (optical) data. While preserving the architectural independence of each modality’s encoder, our method leverages self-supervised learning and a cross-modal fusion network to construct a shared, consistent high-resolution feature space. The resulting embeddings provide 10 m spatial resolution, cloud-free reconstruction, and temporal continuity at the frequency of cloud-free Sentinel-2 acquisitions. Evaluated on global gross primary production (GPP) modelling, our approach significantly improves ecological interpretability and spatio-temporal consistency. This work marks the first successful synergistic representation of multi-source remote sensing data for fine-grained modelling of ecosystem dynamics.

πŸ“ Abstract
Earth observation (EO) foundation models have emerged as an effective approach to derive latent representations of the Earth system from various remote sensing sensors. These models produce embeddings that can be used as analysis-ready datasets, enabling the modelling of ecosystem dynamics without extensive sensor-specific preprocessing. However, existing models typically operate at fixed spatial or temporal scales, limiting their use for ecological analyses that require both fine spatial detail and high temporal fidelity. To overcome these limitations, we propose a representation learning framework that integrates different EO modalities into a unified feature space at high spatio-temporal resolution. We introduce the framework using Sentinel-1 and Sentinel-2 data as representative modalities. Our approach produces a latent space at native 10 m resolution and the temporal frequency of cloud-free Sentinel-2 acquisitions. Each sensor is first modelled independently to capture its sensor-specific characteristics. Their representations are then combined into a shared model. This two-stage design enables modality-specific optimisation and easy extension to new sensors, retaining pretrained encoders while retraining only fusion layers. It thereby allows the model to capture complementary remote sensing data and to preserve coherence across space and time. Qualitative analyses reveal that the learned embeddings exhibit high spatial and semantic consistency across heterogeneous landscapes. Quantitative evaluation in modelling Gross Primary Production reveals that they encode ecologically meaningful patterns and retain sufficient temporal fidelity to support fine-scale analyses. Overall, the proposed framework provides a flexible, analysis-ready representation learning approach for environmental applications requiring diverse spatial and temporal resolutions.
Problem

Research questions and friction points this paper is trying to address.

Overcoming fixed spatial-temporal scale limitations in Earth observation foundation models
Integrating multiple Earth observation modalities into unified high-resolution feature space
Enabling ecological analyses requiring both fine spatial detail and high temporal fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates multimodal Earth observation data into unified feature space
Uses two-stage design for modality-specific optimization and fusion
Produces high-resolution spatio-temporal embeddings for ecological analysis
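The two-stage design above (frozen modality-specific encoders, with only a shared fusion map trained in the second stage) can be sketched as follows. This is a minimal NumPy illustration under assumed shapes: the stand-in "encoders" are random linear maps with a tanh nonlinearity, not the paper's self-supervised networks, and the band counts (2 SAR bands, 10 optical bands) and 16-dimensional embedding size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained per-sensor encoders (stand-ins for the paper's
# self-supervised networks; weights here are just random linear maps).
W_s1 = rng.normal(size=(2, 16))   # Sentinel-1: 2 SAR bands (VV, VH) -> 16-dim
W_s2 = rng.normal(size=(10, 16))  # Sentinel-2: 10 optical bands -> 16-dim

def encode_s1(x):
    # Frozen in stage two: pretrained Sentinel-1 encoder.
    return np.tanh(x @ W_s1)

def encode_s2(x):
    # Frozen in stage two: pretrained Sentinel-2 encoder.
    return np.tanh(x @ W_s2)

# Stage two: only this fusion map would be (re)trained, e.g. when a new
# sensor is added, while the encoders above stay fixed.
W_fuse = rng.normal(size=(32, 16))

def fuse(x_s1, x_s2):
    # Concatenate modality embeddings and project into the shared space.
    z = np.concatenate([encode_s1(x_s1), encode_s2(x_s2)], axis=-1)
    return np.tanh(z @ W_fuse)

# One pixel's SAR and optical observations (random placeholders).
pixel_s1 = rng.normal(size=(1, 2))
pixel_s2 = rng.normal(size=(1, 10))
embedding = fuse(pixel_s1, pixel_s2)
print(embedding.shape)  # (1, 16): one shared embedding per 10 m pixel
```

Because the encoders are never touched in stage two, extending the framework to a third sensor would only mean adding one more frozen encoder and retraining `W_fuse` over the enlarged concatenation.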
πŸ”Ž Similar Papers
No similar papers found.
Julia Peters
Environmental Data Science and Remote Sensing Group; Institute for Earth System Science and Remote Sensing, Leipzig University, Germany
Karin Mora
Environmental Data Science and Remote Sensing Group; Institute for Earth System Science and Remote Sensing, Leipzig University, Germany
Miguel D. Mahecha
Environmental Data Science and Remote Sensing Group; Institute for Earth System Science and Remote Sensing, Leipzig University, Germany
Chaonan Ji
Environmental Data Science and Remote Sensing Group; Institute for Earth System Science and Remote Sensing, Leipzig University, Germany
David Montero
Leipzig University, Institute for Earth System Science and Remote Sensing
Clemens Mosig
Environmental Data Science and Remote Sensing Group; Institute for Earth System Science and Remote Sensing, Leipzig University, Germany
Guido Kraemer
Environmental Data Science and Remote Sensing Group; Institute for Earth System Science and Remote Sensing, Leipzig University, Germany