RSTeller: Scaling Up Visual Language Modeling in Remote Sensing with Rich Linguistic Semantics from Openly Available Data and Large Language Models

📅 2024-08-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high cost, expert dependency, and limited scalability of semantic annotation for remote sensing (RS) imagery, this paper proposes an LLM-driven workflow for multimodal data synthesis and introduces RSTeller, a large-scale vision-language dataset for RS. The method pairs geographic semantics parsed from OpenStreetMap (OSM) with imagery retrieved via the Google Earth Engine (GEE) platform, and uses large language models to automatically generate descriptive captions, eliminating manual annotation while maintaining the quality of the resulting image–text pairs. RSTeller comprises over 1.3 million RS images, each accompanied by two descriptive captions. Extensive experiments demonstrate that continual pre-training on RSTeller improves the performance of multiple existing vision-language models on RS scene understanding tasks, lowering the cost and expertise barriers to high-quality annotated RS data.

📝 Abstract
Abundant, well-annotated multimodal data in remote sensing are pivotal for aligning complex visual remote sensing (RS) scenes with human language, enabling the development of specialized vision language models across diverse RS interpretation tasks. However, annotating RS images with rich linguistic semantics at scale demands expertise in RS and substantial human labor, making it costly and often impractical. In this study, we propose a workflow that leverages large language models (LLMs) to generate multimodal datasets with semantically rich captions at scale from plain OpenStreetMap (OSM) data for images sourced from the Google Earth Engine (GEE) platform. This approach facilitates the generation of paired remote sensing data and can be readily scaled up using openly available data. Within this framework, we present RSTeller, a multimodal dataset comprising over 1.3 million RS images, each accompanied by two descriptive captions. Extensive experiments demonstrate that RSTeller enhances the performance of multiple existing vision language models for RS scene understanding through continual pre-training. Our methodology significantly reduces the manual effort and expertise needed for annotating remote sensing imagery while democratizing access to high-quality annotated data. This advancement fosters progress in visual language modeling and encourages broader participation in remote sensing research and applications. The RSTeller dataset is available at https://github.com/SlytherinGe/RSTeller.
Problem

Research questions and friction points this paper is trying to address.

Remote Sensing
Automatic Image Annotation
Image Captioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Remote Sensing Imagery
Automated Annotation
Junyao Ge
School of Electronic Engineering, Xidian University, Xi’an, Shaanxi 710071, China
Yang Zheng
School of Electronic Engineering, Xidian University, Xi’an, Shaanxi 710071, China
Kaitai Guo
School of Electronic Engineering, Xidian University, Xi’an, Shaanxi 710071, China
Jimin Liang
School of Electronic Engineering, Xidian University, Xi’an, Shaanxi 710071, China