🤖 AI Summary
This work addresses the absence of evaluation benchmarks capable of assessing large language models’ ability to perform geospatial reasoning over long contexts and integrate multi-source, heterogeneous information in realistic, complex scenarios—particularly in high-stakes military decision-making. To this end, we introduce MilSCORE, the first scenario-level benchmark for long-context geospatial reasoning, built upon expert-designed simulated military scenarios that incorporate multimodal data including maps, directives, and intelligence reports. MilSCORE features a multi-hop question-answering task spanning seven question types, explicitly emphasizing multi-hop reasoning, constraint analysis, and strategic planning. Baseline evaluations on several prominent vision-language models reveal limited performance, underscoring MilSCORE’s value as a challenging and realistic evaluation platform.
📝 Abstract
As large language models (LLMs) are applied to increasingly longer and more complex tasks, there is a growing need for realistic long-context benchmarks that require selective reading and integration of heterogeneous, multi-modal information sources. This need is especially acute for geospatial planning problems, such as those found in planning for large-scale military operations, which demand fast and accurate reasoning over maps, orders, intelligence reports, and other distributed data. To address this gap, we present MilSCORE (Military Scenario Contextual Reasoning), to our knowledge the first scenario-level dataset of expert-authored, multi-hop questions grounded in a complex, simulated military planning scenario used for training. MilSCORE is designed to evaluate high-stakes decision-making and planning, probing LLMs' ability to combine tactical and spatial reasoning across multiple sources and to reason over long-horizon, geospatially rich context. The benchmark includes a diverse set of question types across seven categories targeting both factual recall and multi-step reasoning about constraints, strategy, and spatial analysis. We provide an evaluation protocol and report baseline results for a range of contemporary vision-language models. Our findings highlight substantial headroom on MilSCORE, indicating that current systems struggle with realistic, scenario-level long-context planning, and position the benchmark as a challenging testbed for future work.