🤖 AI Summary
Traditional Earth observation relies on raw satellite imagery, which entails high data-acquisition and preprocessing costs and adapts poorly to diverse downstream tasks. This work proposes LIANet, a coordinate-based neural representation that, for the first time, enables continuous reconstruction of multi-temporal remote sensing images using only spatiotemporal coordinates as input, and supports efficient fine-tuning without access to the original data. By combining a neural-radiance-field-style architecture with transfer learning, LIANet substantially lowers the barrier to deploying geospatial foundation models. Experiments show that its fine-tuned performance on tasks such as semantic segmentation and pixel-level regression matches that of models trained from scratch or of current foundation models, confirming its effectiveness and practical utility.
📝 Abstract
In this work, we present LIANet (Location Is All You Need Network), a coordinate-based neural representation that models multi-temporal spaceborne Earth observation (EO) data for a given region of interest as a continuous spatiotemporal neural field. Given only spatial and temporal coordinates, LIANet reconstructs the corresponding satellite imagery. Once pretrained, this representation can be adapted to various EO downstream tasks, such as semantic segmentation or pixel-wise regression; importantly, this adaptation does not require access to the original satellite data. LIANet is intended as a user-friendly alternative to Geospatial Foundation Models (GFMs), eliminating data-access and preprocessing overhead for end users and enabling fine-tuning based solely on labels. We demonstrate pretraining of LIANet across target areas of varying sizes and show that fine-tuning it for downstream tasks achieves performance competitive with training from scratch or using established GFMs. The source code and datasets are publicly available at https://github.com/mojganmadadi/LIANet/tree/v1.0.1.
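To make the core idea concrete, the sketch below shows the general shape of a NeRF-style coordinate field: raw (x, y, t) coordinates are lifted with Fourier positional encodings and passed through a small MLP that predicts per-pixel spectral band values. This is a minimal illustration of the generic technique the abstract describes, not LIANet's actual architecture; all names (`fourier_encode`, `CoordinateField`), layer sizes, the number of frequencies, and the choice of four output bands are assumptions for illustration, and the weights are random rather than trained.

```python
import numpy as np

def fourier_encode(coords, num_freqs=6):
    """NeRF-style positional encoding: [sin(2^k * pi * p), cos(2^k * pi * p)]
    for each coordinate dimension and each frequency k."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi      # (F,)
    angles = coords[..., None] * freqs               # (N, D, F) by broadcasting
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(coords.shape[0], -1)        # (N, D * 2F)

class CoordinateField:
    """Toy MLP f(x, y, t) -> spectral bands. Weights are random here;
    in practice such a field is fit by regressing observed imagery."""
    def __init__(self, in_dim, hidden=64, bands=4, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, np.sqrt(2.0 / in_dim), (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, np.sqrt(2.0 / hidden), (hidden, bands))
        self.b2 = np.zeros(bands)

    def __call__(self, coords):
        h = np.maximum(fourier_encode(coords) @ self.w1 + self.b1, 0.0)  # ReLU
        return h @ self.w2 + self.b2   # one band vector per queried coordinate

# Query the field at three normalized (x, y, t) coordinates in [0, 1].
coords = np.array([[0.2, 0.8, 0.0],
                   [0.5, 0.5, 0.5],
                   [0.9, 0.1, 1.0]])
field = CoordinateField(in_dim=3 * 2 * 6)  # 3 dims * (sin, cos) * 6 frequencies
pixels = field(coords)
print(pixels.shape)  # (3, 4): a 4-band prediction per coordinate
```

Because the field is queried pointwise, imagery at any resolution or timestamp can be sampled from coordinates alone, which is what allows downstream fine-tuning without shipping the original satellite data.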