🤖 AI Summary
Multimodal large language models (MLLMs) often fail to transfer their strong zero-shot performance to out-of-distribution domains such as Earth observation (EO) imagery, particularly for tasks that require fine-grained spatial reasoning like object localization. Method: This work benchmarks recent MLLMs that have been explicitly trained with fine-grained spatial reasoning capabilities on zero-shot EO object localization tasks, examining prompt selection and ground sample distance (GSD) optimization. Contribution/Results: The evaluated models prove performant in certain settings, making them well suited to zero-shot EO localization scenarios. The work also provides a detailed analysis of failure cases and practical guidance for judging whether an MLLM fits a given EO localization task and how to optimize it.
📝 Abstract
Multimodal large language models (MLLMs) have altered the landscape of computer vision, obtaining impressive results across a wide range of tasks, especially in zero-shot settings. Unfortunately, their strong performance does not always transfer to out-of-distribution domains, such as earth observation (EO) imagery. Prior work has demonstrated that MLLMs excel at some EO tasks, such as image captioning and scene understanding, while failing at tasks that require more fine-grained spatial reasoning, such as object localization. However, MLLMs are advancing rapidly, and insights quickly become outdated. In this work, we analyze more recent MLLMs that have been explicitly trained to include fine-grained spatial reasoning capabilities, benchmarking them on EO object localization tasks. We demonstrate that these models are performant in certain settings, making them well suited for zero-shot scenarios. Additionally, we provide a detailed discussion focused on prompt selection, ground sample distance (GSD) optimization, and analyzing failure cases. We hope that this work will prove valuable as others evaluate whether an MLLM is well suited for a given EO localization task and how to optimize it.
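To make the GSD optimization idea concrete, here is a minimal sketch of what GSD-aware preprocessing could look like: imagery is resampled from its native ground sample distance to a target GSD before being passed to the model, and predicted boxes are mapped back to the original pixel grid. The paper does not publish code; the function names, the target-GSD parameter, and the box convention below are illustrative assumptions, not the authors' implementation.

```python
def gsd_adaptive_size(width_px, height_px, native_gsd_m, target_gsd_m):
    """Pixel dimensions that resample a tile from its native ground sample
    distance (GSD, metres/pixel) to a target GSD.

    The tile covers width_px * native_gsd_m metres on the ground, so at the
    target GSD it should span that ground extent divided by target_gsd_m
    pixels.
    """
    scale = native_gsd_m / target_gsd_m
    return max(1, round(width_px * scale)), max(1, round(height_px * scale))


def box_to_native(box_px, native_gsd_m, target_gsd_m):
    """Map a bounding box predicted on the resampled image, given as
    (x0, y0, x1, y1) in pixels, back to the original tile's pixel grid."""
    scale = native_gsd_m / target_gsd_m
    return tuple(v / scale for v in box_px)
```

For example, a 1024x768 tile at 0.5 m/pixel resampled to a hypothetical model-preferred 1.0 m/pixel becomes 512x384, and a box predicted on that resampled image is scaled back up by a factor of two to index the original tile.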