🤖 AI Summary
Existing LLM-based individual mobility prediction models suffer from poor generalization and limited transferability across cities and users. To address this, we propose the first foundation model for mobility forecasting built on an open-source large language model (LLM). Our approach introduces a unified instruction-tuning framework that combines real-world trajectory data from multiple cities with context-aware prompting, enabling robust modeling of heterogeneous urban environments and diverse user behaviors and improving cross-domain adaptability and contextual robustness. Extensive experiments on six real-world datasets demonstrate state-of-the-art performance in both prediction accuracy and cross-city transferability. The model offers a scalable, reusable paradigm for LLM-based spatiotemporal forecasting and intelligent mobility modeling.
📝 Abstract
Large Language Models (LLMs) are widely applied to domain-specific tasks thanks to their broad general knowledge and strong reasoning capabilities. Recent studies have shown the immense potential of LLMs for modeling individual mobility prediction problems. However, most LLM-based mobility prediction models are trained only on specific datasets or rely on a single carefully designed prompt, making them difficult to adapt to different cities and to users with diverse contexts. To fill these gaps, this paper proposes a unified fine-tuning framework for training a foundational, open-source LLM-based mobility prediction model. We conducted extensive experiments on six real-world mobility datasets to validate the proposed model. The results showed that the proposed model outperformed state-of-the-art deep learning and LLM-based models in both prediction accuracy and transferability.
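
The abstract describes a unified instruction-tuning framework with context-aware prompts but gives no concrete format. As a rough illustration only, the sketch below shows how a single user trajectory could be serialized into one instruction-tuning record shared across cities; the `Visit` fields, the template wording, and the `build_instruction_example` helper are hypothetical assumptions, not the paper's actual schema.

```python
# Illustrative sketch only: the paper's actual prompt schema, field names, and
# tuning setup are not given in the abstract, so everything below is assumed.

from dataclasses import dataclass

@dataclass
class Visit:
    place_id: int        # anonymized location identifier
    day_of_week: str     # e.g. "Monday"
    time_slot: str       # e.g. "08:30-09:00"

def build_instruction_example(city: str, user_id: str,
                              history: list[Visit], target_slot: str) -> dict:
    """Turn one user's recent trajectory into a single instruction-tuning record.

    Reusing the same template across cities and users is the kind of unified,
    context-aware formatting the summary describes at a high level.
    """
    history_text = "; ".join(
        f"{v.day_of_week} {v.time_slot} -> place {v.place_id}" for v in history
    )
    prompt = (
        f"You are a mobility prediction assistant for {city}.\n"
        f"User {user_id} visited: {history_text}.\n"
        f"Predict the place id the user will visit at {target_slot}."
    )
    # The supervised target would be filled in from the held-out next visit.
    return {"instruction": prompt, "output": "<next place id>"}

# Example usage with toy data from a hypothetical city:
example = build_instruction_example(
    city="City A",
    user_id="u123",
    history=[Visit(17, "Monday", "08:30-09:00"), Visit(42, "Monday", "12:00-12:30")],
    target_slot="Monday 18:00-18:30",
)
print(example["instruction"])
```

Records in this shape, pooled over multiple cities and users, could then be fed to any standard supervised fine-tuning pipeline for an open-source LLM; the specific datasets and tuning procedure used in the paper are not detailed in this summary.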