🤖 AI Summary
Existing location prediction methods struggle to model the multifunctional semantics of places and the heterogeneity of user mobility behaviors. To address this, we propose NextLocMoE, a novel Mixture-of-Experts (MoE) framework featuring a hierarchical two-tier architecture: an outer layer leverages large language models (LLMs) to partition experts at the location semantic level, while an inner layer embeds trajectory-driven, personalized expert routing within a Transformer backbone to capture user-specific mobility patterns. We further introduce a history-aware routing mechanism to enhance routing stability and interpretability. Evaluated on multiple real-world urban trajectory datasets, NextLocMoE achieves substantial improvements in prediction accuracy (average +12.7%), cross-domain generalization, and semantic interpretability. Our approach establishes a new paradigm for human mobility modeling that jointly advances semantic depth and individual adaptability.
📝 Abstract
Next location prediction plays a critical role in understanding human mobility patterns. However, existing approaches face two core limitations: (1) they fall short in capturing the complex, multi-functional semantics of real-world locations; and (2) they lack the capacity to model heterogeneous behavioral dynamics across diverse user groups. To tackle these challenges, we introduce NextLocMoE, a novel framework built upon large language models (LLMs) and structured around a dual-level Mixture-of-Experts (MoE) design. Our architecture comprises two specialized modules: a Location Semantics MoE that operates at the embedding level to encode rich functional semantics of locations, and a Personalized MoE embedded within the Transformer backbone to dynamically adapt to individual user mobility patterns. In addition, we incorporate a history-aware routing mechanism that leverages long-term trajectory data to enhance expert selection and ensure prediction stability. Empirical evaluations across several real-world urban datasets show that NextLocMoE achieves superior performance in terms of predictive accuracy, cross-domain generalization, and interpretability.
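To make the history-aware routing idea concrete, the sketch below shows one plausible form of such a gate: expert logits are computed from both the current trajectory token and a long-term history embedding, blended by a mixing weight, and the top-k experts are selected with renormalized gate weights. This is a minimal illustrative sketch, not the paper's implementation; the function name `history_aware_route`, the linear scoring via `expert_keys`, and the blending parameter `alpha` are all assumptions introduced here for exposition.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def history_aware_route(token, history, expert_keys, k=2, alpha=0.5):
    """Hypothetical history-aware top-k MoE router (illustrative only).

    token:       current trajectory-step embedding, shape (d,)
    history:     long-term trajectory summary embedding, shape (d,)
    expert_keys: one key vector per expert, shape (n_experts, d)
    alpha:       how much the long-term history biases expert selection
    Returns (indices of the top-k experts, renormalized gate weights).
    """
    cur_logits = expert_keys @ token       # affinity to the current step
    hist_logits = expert_keys @ history    # affinity to long-term behavior
    logits = (1.0 - alpha) * cur_logits + alpha * hist_logits
    gates = softmax(logits)
    topk = np.argsort(gates)[::-1][:k]     # highest-gate experts first
    weights = gates[topk] / gates[topk].sum()  # renormalize over top-k
    return topk, weights
```

In an MoE layer, the selected experts' outputs would then be combined as a weighted sum using `weights`; conditioning the gate on `history` is what would keep expert assignments stable across a user's trajectory rather than fluctuating step by step.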