MoE-TransMov: A Transformer-based Model for Next POI Prediction in Familiar & Unfamiliar Movements

πŸ“… 2025-12-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing POI prediction methods struggle to distinguish between users’ mobility patterns in familiar versus unfamiliar regions, resulting in poor generalizability. To address this, we propose the first unified dynamic gating architecture that jointly models both scenarios, integrating Mixture-of-Experts (MoE) with Transformer-based sequence modeling. A learnable gating network adaptively identifies mobility context and enables fine-grained decoupled modeling of familiar and unfamiliar regions via expert routing. Region familiarity is quantified from check-in data, and the model employs multi-head self-attention alongside dual parallel expert decoders. Extensive experiments on Foursquare NYC and Kyoto datasets demonstrate significant improvements over state-of-the-art methods across Top-1/5/10 accuracy and Mean Reciprocal Rank (MRR), validating strong cross-scenario generalization and practical effectiveness.
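The routing described above (a learnable gate mixing dual parallel expert decoders over a shared Transformer representation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, weight shapes, and the use of a simple linear gate and linear expert heads are assumptions for clarity.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_decode(h, gate_w, expert_ws):
    """Route a trajectory representation through two parallel expert
    decoders and mix their POI logits by learned gate weights.

    h:         (d,) encoder output for the current trajectory
    gate_w:    (d, 2) gating-network weights (familiar vs. unfamiliar expert)
    expert_ws: list of two (d, num_poi) expert decoder weight matrices
    """
    gate = softmax(h @ gate_w)            # (2,) adaptive mixture weights
    logits = [h @ w for w in expert_ws]   # per-expert POI scores
    return sum(g * l for g, l in zip(gate, logits))

# toy usage: hidden size d=4, 5 candidate POIs
rng = np.random.default_rng(0)
h = rng.normal(size=4)
out = moe_decode(h, rng.normal(size=(4, 2)),
                 [rng.normal(size=(4, 5)) for _ in range(2)])
print(out.shape)  # (5,)
```

In practice the gate would be trained jointly with the experts so that trajectories from familiar regions are routed predominantly to one decoder and unfamiliar ones to the other.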

πŸ“ Abstract
Accurate prediction of the next point of interest (POI) within human mobility trajectories is essential for location-based services, as it enables more timely and personalized recommendations. Studies have shown that users exhibit different POI choices in familiar versus unfamiliar areas, highlighting the importance of incorporating user familiarity into predictive models. However, existing methods often fail to distinguish between user movements in familiar and unfamiliar regions. To address this, we propose MoE-TransMov, a Transformer-based model with a Mixture-of-Experts (MoE) architecture that captures distinct mobility patterns across different movement contexts within a single framework, without requiring separate training for each scenario. Using user check-in data, we classify movements into familiar and unfamiliar categories and develop specialized expert networks to improve prediction accuracy. Our approach integrates self-attention mechanisms with an adaptive gating network that dynamically selects the most relevant expert for each mobility context. Experiments on two real-world datasets, the widely used but small open-source Foursquare NYC dataset and the large-scale Kyoto dataset collected with LY Corporation (Yahoo Japan Corporation), show that MoE-TransMov outperforms state-of-the-art baselines with notable improvements in Top-1, Top-5, and Top-10 accuracy and mean reciprocal rank (MRR). These results indicate that the approach efficiently improves mobility prediction across movement contexts, thereby enhancing the personalization of recommendation systems and supporting a range of urban applications.
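The evaluation metrics named in the abstract are standard and easy to state precisely. The sketch below computes Top-k accuracy and MRR from ranked prediction lists; the function names and the toy data are illustrative, not from the paper.

```python
def topk_accuracy(ranked, truth, k):
    """Fraction of samples whose true POI appears in the top-k predictions."""
    return sum(t in r[:k] for r, t in zip(ranked, truth)) / len(truth)

def mrr(ranked, truth):
    """Mean reciprocal rank of the true POI (0 if absent from the list)."""
    rr = [1.0 / (r.index(t) + 1) if t in r else 0.0
          for r, t in zip(ranked, truth)]
    return sum(rr) / len(rr)

ranked = [[3, 1, 2], [2, 3, 1]]   # model's ranked POI ids per sample
truth = [1, 1]                    # ground-truth next POI per sample
print(topk_accuracy(ranked, truth, 1))  # 0.0 (truth never ranked first)
print(mrr(ranked, truth))               # (1/2 + 1/3) / 2 ≈ 0.4167
```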
Problem

Research questions and friction points this paper is trying to address.

Predicts next POI in familiar and unfamiliar mobility contexts
Incorporates user familiarity into mobility pattern modeling
Uses MoE-Transformer to capture distinct movement patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based model with Mixture-of-Experts architecture
Dynamically selects expert models using self-attention and gating
Classifies movements into familiar and unfamiliar categories
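The classification step in the list above (labeling movements as familiar or unfamiliar from check-in history) could be sketched as a simple visit-count rule. This is an assumed, minimal formulation: the paper quantifies region familiarity from check-in data, but the threshold rule, region granularity, and names below are hypothetical.

```python
from collections import Counter

def label_familiarity(checkins, threshold=5):
    """Tag each check-in as 'familiar' or 'unfamiliar' based on how many
    times the user has previously visited that region.

    checkins:  chronological list of (user_id, region_id) pairs
    threshold: hypothetical cutoff on prior visit count
    """
    seen = Counter()
    labels = []
    for user, region in checkins:
        labels.append("familiar" if seen[(user, region)] >= threshold
                      else "unfamiliar")
        seen[(user, region)] += 1
    return labels

checkins = [("u1", "shinjuku")] * 7
print(label_familiarity(checkins, threshold=5))
# first five visits labeled 'unfamiliar', the remaining two 'familiar'
```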