CityBench: Evaluating the Capabilities of Large Language Models for Urban Tasks

📅 2024-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM/VLM evaluation benchmarks lack systematic coverage of urban research tasks, constrained by urban data diversity, scenario complexity, and environmental dynamism. To address this gap, we propose CityBench—the first systematic, domain-specific benchmark for urban research—comprising eight tasks across two categories: perception-understanding (e.g., crowd analysis, image reasoning) and decision-making (e.g., geographic forecasting, traffic control). CityBench leverages the multi-source CityData dataset and the fine-grained, dynamic CitySimu simulation platform. Its core innovation is an interactive, simulation-driven evaluation paradigm enabling scalable, multi-city, multi-task assessment. Extensive experiments across 13 cities and 30 state-of-the-art models reveal robust performance on commonsense and semantic understanding tasks, yet expose significant bottlenecks in tasks demanding domain-specific expertise and advanced numerical reasoning—highlighting critical gaps in current urban AI capabilities.

📝 Abstract
As large language models (LLMs) continue to advance and gain widespread use, establishing systematic and reliable evaluation methodologies for LLMs and vision-language models (VLMs) has become essential to ensure their real-world effectiveness and reliability. There have been some early explorations of the usability of LLMs for limited urban tasks, but a systematic and scalable evaluation benchmark is still lacking. The challenge in constructing a systematic evaluation benchmark for urban research lies in the diversity of urban data, the complexity of application scenarios, and the highly dynamic nature of the urban environment. In this paper, we design CityBench, an interactive simulator-based evaluation platform, as the first systematic benchmark for evaluating the capabilities of LLMs for diverse tasks in urban research. First, we build CityData to integrate the diverse urban data and CitySimu to simulate fine-grained urban dynamics. Based on CityData and CitySimu, we design 8 representative urban tasks in 2 categories, perception-understanding and decision-making, as the CityBench. With extensive results from 30 well-known LLMs and VLMs in 13 cities around the world, we find that advanced LLMs and VLMs can achieve competitive performance in diverse urban tasks requiring commonsense and semantic understanding abilities, e.g., understanding human dynamics and semantic inference of urban images. Meanwhile, they fail to solve challenging urban tasks requiring professional knowledge and high-level numerical abilities, e.g., geospatial prediction and traffic control.
Problem

Research questions and friction points this paper is trying to address.

A systematic, scalable benchmark for evaluating LLMs on urban tasks is lacking
Diverse, complex, and highly dynamic urban data make benchmark construction difficult
Assessing LLM capabilities on urban perception-understanding and decision-making tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

An interactive, simulator-based platform for urban LLM evaluation
Integration of diverse urban data (CityData) and fine-grained urban dynamics (CitySimu)
A benchmark of 8 representative tasks across 2 categories
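The two-level task taxonomy above can be sketched as a simple registry. This is a hypothetical illustration, not the paper's actual code: the category names and the four named tasks come from the abstract and summary, while the registry structure and the `tasks_in` helper are assumptions for illustration only.

```python
# Hypothetical sketch of CityBench's task taxonomy (not the paper's code).
# Category names and example tasks are taken from the abstract/summary;
# the full benchmark comprises 8 tasks across these 2 categories.
CITYBENCH_TASKS = {
    "perception-understanding": [
        "human dynamics understanding",    # e.g., crowd analysis
        "urban image semantic inference",  # e.g., image reasoning
    ],
    "decision-making": [
        "geospatial prediction",
        "traffic control",
    ],
}


def tasks_in(category: str) -> list[str]:
    """Return the example tasks registered under a category."""
    return CITYBENCH_TASKS.get(category, [])
```

A multi-city, multi-task evaluation would then iterate over such a registry, pairing each task with per-city data and simulator state.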
Jie Feng
Department of Electronic Engineering, Tsinghua University, Beijing, China
Jun Zhang
Department of Electronic Engineering, Tsinghua University, Beijing, China
Tianhui Liu
Hong Kong University of Science and Technology (Guangzhou), Tsinghua University
Large Language Model, Urban Science, Spatial Intelligence
Xin Zhang
Department of Electronic Engineering, Tsinghua University, Beijing, China
Tianjian Ouyang
Tsinghua University
Junbo Yan
Department of Electronic Engineering, Tsinghua University, Beijing, China
Yuwei Du
Tsinghua University
trajectory modelling
Siqi Guo
PhD Student, Purdue University
HAI, HCI, VR, intelligent virtual agents, embodied conversational agents
Yong Li
Department of Electronic Engineering, Tsinghua University, Beijing, China