🤖 AI Summary
To address the challenge of modeling dynamic land-surface evolution in remote sensing time-series change detection—where existing vision-language models (VLMs) exhibit limited temporal reasoning capability—this paper introduces GeoVideoPairs, the first geographically grounded, temporally aligned video-frame-pair dataset with fine-grained change annotations. We propose a lightweight, multi-stage fine-tuning framework integrating LoRA, QLoRA, and structured pruning, enabling efficient adaptation of Video-LLaVA and LLaVA-NeXT-Video to remote sensing time-series tasks—the first such adaptation for these models. Our method significantly enhances fine-grained perception of land-use changes and generation of natural language descriptions thereof. On standard benchmarks, it achieves BERTScore 0.864 and ROUGE-1 0.576, substantially outperforming prior baselines. This work establishes a new paradigm for VLM-driven dynamic remote sensing understanding and provides both a foundational benchmark dataset and a scalable adaptation methodology.
📝 Abstract
Detecting temporal changes in geographical landscapes is critical for applications like environmental monitoring and urban planning. While remote sensing data is abundant, existing vision-language models (VLMs) often fail to capture temporal dynamics effectively. This paper addresses these limitations by introducing an annotated dataset of video frame pairs to track evolving geographical patterns over time. Using fine-tuning techniques like Low-Rank Adaptation (LoRA), quantized LoRA (QLoRA), and model pruning on models such as Video-LLaVA and LLaVA-NeXT-Video, we significantly enhance VLM performance in processing remote sensing temporal changes. Results show significant improvements, with the best performance achieving a BERT score of 0.864 and ROUGE-1 score of 0.576, demonstrating superior accuracy in describing land-use transformations.