🤖 AI Summary
Existing LLM/VLM evaluation benchmarks lack systematic coverage of urban research tasks; building such coverage is difficult because of the diversity of urban data, the complexity of application scenarios, and the highly dynamic nature of urban environments. To address this gap, we propose CityBench, the first systematic, domain-specific benchmark for urban research, comprising eight tasks across two categories: perception-understanding (e.g., crowd analysis, image reasoning) and decision-making (e.g., geospatial prediction, traffic control). CityBench leverages the multi-source CityData dataset and the fine-grained, dynamic CitySimu simulation platform. Its core innovation is an interactive, simulation-driven evaluation paradigm that enables scalable, multi-city, multi-task assessment. Extensive experiments across 13 cities and 30 state-of-the-art models show robust performance on tasks requiring commonsense and semantic understanding, yet expose significant bottlenecks on tasks demanding domain-specific expertise and advanced numerical reasoning, highlighting critical gaps in current urban AI capabilities.
📝 Abstract
As large language models (LLMs) continue to advance and gain widespread use, establishing systematic and reliable evaluation methodologies for LLMs and vision-language models (VLMs) has become essential to ensure their real-world effectiveness and reliability. There have been early explorations into the usability of LLMs for limited urban tasks, but a systematic and scalable evaluation benchmark is still lacking. The challenge in constructing such a benchmark for urban research lies in the diversity of urban data, the complexity of application scenarios, and the highly dynamic nature of the urban environment. In this paper, we design CityBench, an interactive, simulator-based evaluation platform, as the first systematic benchmark for evaluating the capabilities of LLMs on diverse tasks in urban research. First, we build CityData to integrate diverse urban data and CitySimu to simulate fine-grained urban dynamics. On top of CityData and CitySimu, we design 8 representative urban tasks in 2 categories, perception-understanding and decision-making, as CityBench. With extensive results from 30 well-known LLMs and VLMs in 13 cities around the world, we find that advanced LLMs and VLMs achieve competitive performance on diverse urban tasks requiring commonsense and semantic understanding abilities, e.g., understanding human dynamics and semantic inference from urban images. Meanwhile, they fail to solve challenging urban tasks requiring professional knowledge and high-level numerical abilities, e.g., geospatial prediction and traffic control.
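To make the interactive, simulation-driven evaluation paradigm concrete, here is a minimal Python sketch of what a closed-loop decision-making evaluation (e.g., traffic control) could look like: the model repeatedly observes simulator state, chooses an action, and is scored on the resulting dynamics rather than on static question-answer pairs. All names below (`MockCitySimu`, `reset`, `step`, `query_llm`) are hypothetical illustrations of the pattern, not the actual CityBench or CitySimu API.

```python
# Hypothetical sketch of an interactive, simulation-driven evaluation loop.
# None of these names come from the CityBench codebase; they only illustrate
# the general pattern described in the abstract.

from dataclasses import dataclass


@dataclass
class MockCitySimu:
    """Stand-in for a fine-grained urban simulator (e.g., traffic signals)."""
    step_count: int = 0
    total_wait: float = 0.0

    def reset(self) -> str:
        """Start an episode and return the initial textual observation."""
        self.step_count, self.total_wait = 0, 0.0
        return "intersection A: queue_north=12, queue_east=3, phase=NS_green"

    def step(self, action: str) -> tuple[str, bool]:
        """Apply the model's action and advance the (mocked) dynamics one tick."""
        self.step_count += 1
        # A real simulator would update vehicle/crowd dynamics here; we just
        # accrue a toy waiting-time cost that depends on the chosen action.
        self.total_wait += 12.0 if "keep" in action else 7.5
        obs = f"t={self.step_count}: queues updated after action '{action}'"
        return obs, self.step_count >= 10  # episode ends after 10 ticks


def query_llm(prompt: str) -> str:
    """Placeholder for a call to the model under evaluation."""
    return "switch phase to EW_green"  # a real model would decide from the prompt


def evaluate(sim: MockCitySimu) -> float:
    """Run one closed-loop episode and return the cumulative waiting time."""
    obs, done = sim.reset(), False
    while not done:
        prompt = f"You control a traffic signal. Observation: {obs}. Choose an action."
        obs, done = sim.step(query_llm(prompt))
    return sim.total_wait  # lower cumulative waiting time = better control policy


if __name__ == "__main__":
    print(f"cumulative waiting time: {evaluate(MockCitySimu()):.1f}")
```

The key design point this sketch illustrates is that decision-making tasks are scored by simulator outcomes accumulated over an episode, which is what makes the benchmark interactive and dynamic rather than a fixed set of prompts.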