🤖 AI Summary
Current vision-language models perform poorly on spatial reasoning tasks that require aligning abstract top-down maps with first-person street-view images. This work proposes the first large-scale, geographically diverse, and ambiguity-controlled Map-to-StreetView benchmark, constructed by integrating geographic information system (GIS) data with paired street-view imagery to form structured reasoning trajectories suitable for supervised fine-tuning (SFT) and reinforcement learning. Experimental results show that even the best-performing model achieves only 65.2% accuracy, substantially below the human performance of 95%. Although SFT and reinforcement learning yield consistent improvements, models still generalize poorly across benchmarks. This benchmark provides a systematic platform for evaluating and advancing spatial alignment and reasoning capabilities in multimodal models.
📝 Abstract
Vision-language models (VLMs) achieve strong performance on many multimodal benchmarks but remain brittle on spatial reasoning tasks that require aligning abstract overhead representations with egocentric views. We introduce m2sv, a scalable benchmark for map-to-street-view spatial reasoning that asks models to infer camera viewing direction by aligning a north-up overhead map with a Street View image captured at the same real-world intersection. We release m2sv-20k, a geographically diverse benchmark with controlled ambiguity, along with m2sv-sft-11k, a curated set of structured reasoning traces for supervised fine-tuning. Despite strong performance on existing multimodal benchmarks, the best evaluated VLM achieves only 65.2% accuracy on m2sv, far below the human baseline of 95%. While supervised fine-tuning and reinforcement learning yield consistent gains, cross-benchmark evaluations reveal limited transfer. Beyond aggregate accuracy, we systematically analyze difficulty in map-to-street-view reasoning using both structural signals and human effort, and conduct an extensive failure analysis of adapted open models. Our findings highlight persistent gaps in geometric alignment, evidence aggregation, and reasoning consistency, motivating future work on grounded spatial reasoning across viewpoints.