CityGPT: Empowering Urban Spatial Cognition of Large Language Models

📅 2024-06-20
🏛️ arXiv.org
📈 Citations: 14
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit limited spatial reasoning in urban geospatial tasks because they lack grounding in physical-world knowledge. Method: the paper proposes a city-scale "world model" construction framework with three components: (1) CityInstruction, an instruction-tuning dataset explicitly designed for urban spatial reasoning; (2) self-weighted fine-tuning (SWFT), a training strategy that builds domain-specific competence without compromising general-purpose performance; and (3) CityEval, a text-based benchmark for evaluating urban spatial reasoning. Contribution/Results: applied to small open-source LLMs (e.g., ChatGLM3-6B), the approach achieves a 32.7% average accuracy gain on CityEval, outperforming several proprietary large models while preserving baseline general capabilities. To the authors' knowledge, this is the first work to systematically integrate spatial cognitive modeling into the LLM training paradigm, establishing a pathway toward urban intelligent agents.

📝 Abstract
Large language models (LLMs), with their powerful language generation and reasoning capabilities, have already achieved notable success in many domains, e.g., math and code generation. However, they often fall short when tackling real-life geospatial tasks within urban environments. This limitation stems from a lack of physical-world knowledge and relevant data during training. To address this gap, we propose CityGPT, a systematic framework designed to enhance LLMs' understanding of urban space and improve their ability to solve related urban tasks by integrating a city-scale "world model" into the model. Firstly, we construct a diverse instruction tuning dataset, CityInstruction, for injecting urban knowledge into LLMs and effectively boosting their spatial reasoning capabilities. Using a combination of CityInstruction and open-source general instruction data, we introduce a novel and easy-to-use self-weighted fine-tuning method (SWFT) to train various LLMs (including ChatGLM3-6B, Llama3-8B, and Qwen2.5-7B) to enhance their urban spatial capabilities without compromising, or even improving, their general abilities. Finally, to validate the effectiveness of our proposed framework, we develop a comprehensive text-based spatial benchmark, CityEval, for evaluating the performance of LLMs across a wide range of urban scenarios and geospatial tasks. Extensive evaluation results demonstrate that smaller LLMs trained with CityInstruction by the SWFT method can achieve performance that is competitive with, and in some cases superior to, proprietary LLMs when assessed using CityEval.
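The abstract describes self-weighted fine-tuning over a mix of CityInstruction and general instruction data. The paper's exact weighting rule is not given in this card, so the following is only a minimal sketch of the self-weighting idea, assuming per-example weights derived from the model's own losses via a softmax; all function names are illustrative, not the paper's API.

```python
import math

def self_weights(losses, temperature=1.0):
    """Turn per-example losses into normalized weights via a softmax,
    so harder (higher-loss) examples contribute more to the update.
    Illustrative only: the actual SWFT weighting rule may differ."""
    scaled = [loss / temperature for loss in losses]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_batch_loss(example_losses, temperature=1.0):
    """Self-weighted objective for one mixed batch of CityInstruction
    and general instruction examples: sum_i w_i * loss_i."""
    weights = self_weights(example_losses, temperature)
    return sum(w * loss for w, loss in zip(weights, example_losses))
```

With equal per-example losses this reduces to the ordinary mean loss, which is consistent with the claim that general abilities need not degrade: examples the model already handles well are down-weighted rather than dropped.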
Problem

Research questions and friction points this paper is trying to address.

How to enhance LLMs' urban spatial cognition and geospatial task-solving abilities
How to ground LLMs in a city-scale world model to close physical-world knowledge gaps
How to inject urban knowledge through instruction data and fine-tuning without degrading general abilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates a city-scale world model into LLMs
CityInstruction, a diverse urban instruction-tuning dataset
Self-weighted fine-tuning (SWFT) that preserves general abilities during domain training
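CityEval reports performance across many urban scenarios and geospatial tasks. The benchmark's actual task list is not reproduced in this card, so here is only a minimal sketch of the kind of macro-averaged scoring such a multi-task benchmark implies; the task names in the example are illustrative, not CityEval's real categories.

```python
def macro_accuracy(results):
    """Macro-average accuracy over task categories.

    `results` maps a task name to a list of (prediction, gold) pairs.
    Each task is scored independently, then averaged with equal weight,
    so large tasks cannot drown out small ones."""
    per_task = {
        task: sum(pred == gold for pred, gold in pairs) / len(pairs)
        for task, pairs in results.items()
    }
    macro = sum(per_task.values()) / len(per_task)
    return per_task, macro
```

For example, a model scoring 0.5 on one task and 1.0 on another gets a macro average of 0.75 regardless of how many items each task contains.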