Location Is All You Need: Continuous Spatiotemporal Neural Representations of Earth Observation Data

📅 2026-04-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional Earth observation pipelines rely on raw satellite imagery, which incurs high data-acquisition and preprocessing costs and adapts poorly to diverse downstream tasks. This work proposes LIANet, a coordinate-based neural representation that, for the first time, reconstructs multi-temporal remote sensing images continuously from spatiotemporal coordinates alone and supports efficient fine-tuning without access to the original data. By combining a neural-field architecture with transfer learning, LIANet substantially lowers the barrier to deploying geospatial foundation models. Experiments show that its fine-tuned performance on tasks such as semantic segmentation and pixel-level regression is competitive with models trained from scratch and with current foundation models, confirming its effectiveness and practical utility.
📝 Abstract
In this work, we present LIANet (Location Is All You Need Network), a coordinate-based neural representation that models multi-temporal spaceborne Earth observation (EO) data for a given region of interest as a continuous spatiotemporal neural field. Given only spatial and temporal coordinates, LIANet reconstructs the corresponding satellite imagery. Once pretrained, this neural representation can be adapted to various EO downstream tasks, such as semantic segmentation or pixel-wise regression; importantly, this adaptation does not require access to the original satellite data. LIANet is intended to serve as a user-friendly alternative to Geospatial Foundation Models (GFMs) by eliminating the overhead of data access and preprocessing for end-users and enabling fine-tuning solely based on labels. We demonstrate the pretraining of LIANet across target areas of varying sizes and show that fine-tuning it for downstream tasks achieves competitive performance compared to training from scratch or using established GFMs. The source code and datasets are publicly available at https://github.com/mojganmadadi/LIANet/tree/v1.0.1.
Problem

Research questions and friction points this paper is trying to address.

Earth observation
spatiotemporal representation
neural fields
downstream tasks
satellite imagery
Innovation

Methods, ideas, or system contributions that make the work stand out.

coordinate-based neural representation
continuous spatiotemporal neural field
Earth observation
Geospatial Foundation Models
data-free fine-tuning
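The core idea listed above, a network that maps bare spatiotemporal coordinates to pixel values, can be sketched as a minimal coordinate MLP in NumPy. This is an illustrative toy with untrained random weights; `fourier_features`, `CoordinateField`, and the four-band output are assumptions of this sketch, not LIANet's actual architecture:

```python
import numpy as np

def fourier_features(coords, num_freqs=4):
    # Lift raw (lat, lon, t) coordinates into sinusoidal features,
    # a common trick that lets coordinate MLPs fit high-frequency detail.
    freqs = 2.0 ** np.arange(num_freqs)                # (F,)
    scaled = coords[:, :, None] * freqs                # (N, 3, F)
    feats = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return feats.reshape(coords.shape[0], -1)          # (N, 3 * 2 * F)

class CoordinateField:
    """Tiny two-layer MLP mapping spatiotemporal coordinates to pixel values."""
    def __init__(self, in_dim, hidden=32, out_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, out_dim))
        self.b2 = np.zeros(out_dim)

    def __call__(self, coords):
        h = np.maximum(fourier_features(coords) @ self.w1 + self.b1, 0.0)  # ReLU
        return h @ self.w2 + self.b2   # e.g. 4 spectral bands per queried pixel

# Query the (untrained) field at three (lat, lon, normalized-time) coordinates.
coords = np.array([[48.1, 11.6, 0.25],
                   [48.2, 11.7, 0.50],
                   [48.3, 11.8, 0.75]])
field = CoordinateField(in_dim=3 * 2 * 4)
pixels = field(coords)
print(pixels.shape)  # (3, 4): one 4-band pixel per coordinate
```

In the pretraining stage described by the abstract, such a field would be fit so that queried pixels match the satellite imagery; downstream adaptation then only needs coordinates and labels, never the imagery itself.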
Mojgan Madadikhaljan
University of the Bundeswehr Munich, Germany
Jonathan Prexl
University of the Bundeswehr Munich, Germany
Isabelle Wittmann
Research Software Engineer, IBM Research
Conrad M Albrecht
Columbia University, USA
Michael Schmitt
University of the Bundeswehr Munich, Germany
Earth Observation · Remote Sensing · Data Fusion · Machine Learning · Synthetic Aperture Radar