MilSCORE: Benchmarking Long-Context Geospatial Reasoning and Planning in Large Language Models

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the absence of evaluation benchmarks capable of assessing large language models’ ability to perform geospatial reasoning over long contexts and integrate multi-source, heterogeneous information in realistic, complex scenarios—particularly in high-stakes military decision-making. To this end, we introduce MilSCORE, the first scenario-level benchmark for long-context geospatial reasoning, built upon expert-designed simulated military scenarios that incorporate multimodal data including maps, directives, and intelligence reports. MilSCORE features a multi-hop question-answering task spanning seven question types, explicitly emphasizing multi-hop reasoning, constraint analysis, and strategic planning. Baseline evaluations on several prominent vision-language models reveal limited performance, underscoring MilSCORE’s value as a challenging and realistic evaluation platform.

📝 Abstract
As large language models (LLMs) are applied to increasingly longer and more complex tasks, there is a growing need for realistic long-context benchmarks that require selective reading and integration of heterogeneous, multi-modal information sources. This need is especially acute for geospatial planning problems, such as those found in planning for large-scale military operations, which demand fast and accurate reasoning over maps, orders, intelligence reports, and other distributed data. To address this gap, we present MilSCORE (Military Scenario Contextual Reasoning), to our knowledge the first scenario-level dataset of expert-authored, multi-hop questions grounded in a complex, simulated military planning scenario used for training. MilSCORE is designed to evaluate high-stakes decision-making and planning, probing LLMs' ability to combine tactical and spatial reasoning across multiple sources and to reason over long-horizon, geospatially rich context. The benchmark includes a diverse set of question types across seven categories targeting both factual recall and multi-step reasoning about constraints, strategy, and spatial analysis. We provide an evaluation protocol and report baseline results for a range of contemporary vision-language models. Our findings highlight substantial headroom on MilSCORE, indicating that current systems struggle with realistic, scenario-level long-context planning, and positioning MilSCORE as a challenging testbed for future work.
Problem

Research questions and friction points this paper is trying to address.

long-context reasoning
geospatial planning
multi-modal information
military operations
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

long-context reasoning
geospatial planning
multi-hop question answering
military scenario benchmark
multimodal integration
Aadi Palnitkar
University of Maryland, College Park MD, USA

Mingyang Mao
ERA Lab, University of South Florida, Tampa FL, USA; EEHPC Lab, Johns Hopkins University, Baltimore MD, USA

Nicholas R. Waytowich
DEVCOM Army Research Laboratory, Aberdeen Proving Ground MD, USA

Vinicius G. Goecks
U.S. Army DEVCOM Army Research Laboratory
Machine Learning · Artificial Intelligence · Human-Robot Interaction · Robotics · Reinforcement Learning

T. Mohsenin
EEHPC Lab, Johns Hopkins University, Baltimore MD, USA

Xiaomin Lin
Assistant Professor, University of South Florida
AI for Good · Robotics for Science · Robotics for Good