🤖 AI Summary
Image geolocation is hampered by high visual similarity across distant places and a vast search space, both of which degrade localization accuracy. This paper proposes a hierarchical sequence prediction framework: geographic coordinates are encoded as token sequences over the nested S2 grid hierarchy, and predictions are refined autoregressively level by level, from coarse regions down to fine-grained coordinates, enabling end-to-end localization. It is the first work to formulate geolocation as a hierarchical token prediction task, eliminating the need for explicit semantic partitioning. To improve inference accuracy, the method incorporates beam search and multi-sample decoding, inspired by test-time compute scaling strategies from large language models. On Im2GPS3k and YFCC4k it achieves significant improvements over prior approaches: accuracy gains of up to 13.9% without multimodal LLMs (MLLMs), and state-of-the-art performance across all metrics when integrated with MLLMs.
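To make the "coordinates as a coarse-to-fine token sequence" idea concrete, here is a minimal sketch. It uses a plain lat/lon quadtree as a simplified stand-in for the S2 grid (the real S2 library projects onto a cube and uses a Hilbert curve; `encode_hierarchy` and its quadrant tokens are illustrative, not the paper's actual tokenizer):

```python
def encode_hierarchy(lat, lon, levels=8):
    """Encode a coordinate as a coarse-to-fine token sequence.

    Simplified quadtree analogue of S2 cells: at each level the current
    cell is split into four quadrants, and the emitted token (0-3) names
    the quadrant containing the point. Nearby points share long prefixes,
    which is the property the hierarchical prediction relies on.
    """
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    tokens = []
    for _ in range(levels):
        lat_mid = (lat_lo + lat_hi) / 2.0
        lon_mid = (lon_lo + lon_hi) / 2.0
        # bit 1 of the token: north/south half; bit 0: east/west half
        token = (2 if lat >= lat_mid else 0) + (1 if lon >= lon_mid else 0)
        tokens.append(token)
        lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
    return tokens
```

Note the nesting property: truncating a fine-level sequence yields exactly the coarse-level sequence for the same point, so predicting one more token always refines the previous cell rather than contradicting it.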
📝 Abstract
Image geolocalization, the task of determining an image's geographic origin, poses significant challenges, largely due to visual similarities across disparate locations and the large search space. To address these issues, we propose a hierarchical sequence prediction approach inspired by how humans narrow down locations from broad regions to specific addresses. Analogously, our model predicts geographic tokens hierarchically, first identifying a general region and then sequentially refining predictions to increasingly precise locations. Rather than relying on explicit semantic partitions, our method uses S2 cells, a nested, multiresolution global grid, and sequentially predicts finer-level cells conditioned on visual inputs and previous predictions. This procedure mirrors autoregressive text generation in large language models. Much like in language modeling, final performance depends not only on training but also on inference-time strategy. We investigate multiple top-down traversal methods for autoregressive sampling, incorporating techniques from test-time compute scaling used in language models. Specifically, we integrate beam search and multi-sample inference while exploring various selection strategies to determine the final output. This enables the model to manage uncertainty by exploring multiple plausible paths through the hierarchy. We evaluate our method on the Im2GPS3k and YFCC4k datasets against two distinct sets of baselines: those that operate without a Multimodal Large Language Model (MLLM) and those that leverage one. In the MLLM-free setting, our model surpasses other comparable baselines on nearly all metrics, achieving state-of-the-art performance with accuracy gains of up to 13.9%. When augmented with an MLLM, our model outperforms all baselines, setting a new state-of-the-art across all metrics. The source code is available at https://github.com/NNargesNN/GeoToken.
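The beam-search traversal described above can be sketched generically. The snippet below is an illustrative implementation under stated assumptions, not the paper's code: `score_fn` stands in for the model's conditional token distribution (which in the paper is also conditioned on the image), and the final output is selected as the highest-scoring complete path, one of several selection strategies the abstract mentions:

```python
import math

def beam_search(score_fn, levels, beam_width=3, vocab=4):
    """Coarse-to-fine beam search over a token hierarchy.

    score_fn(prefix, token) -> log-probability of extending `prefix`
    with `token` at the next hierarchy level. Keeping beam_width partial
    paths lets the model hedge between several plausible regions before
    committing to a fine-grained cell.
    """
    beams = [([], 0.0)]  # (token prefix, cumulative log-probability)
    for _ in range(levels):
        candidates = []
        for prefix, logp in beams:
            for tok in range(vocab):
                candidates.append((prefix + [tok], logp + score_fn(prefix, tok)))
        # keep only the beam_width highest-scoring partial sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams  # ranked (sequence, log-probability) pairs
```

Multi-sample decoding differs only in the expansion step: instead of exhaustively scoring every child and pruning, each of several independent runs samples one child per level from the conditional distribution, and a selection rule (e.g. best score, or agreement among samples) picks the final sequence.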