RALLM-POI: Retrieval-Augmented LLM for Zero-shot Next POI Recommendation with Geographical Reranking

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak geospatial relevance and over-generic outputs of large language models (LLMs) in zero-shot next Point-of-Interest (POI) recommendation, this paper proposes RALLM-POI, a retrieval-augmented, geography-aware framework. It first applies a Historical Trajectory Retriever (HTR) to fetch relevant past trajectories that capture user mobility patterns; it then reranks these with a Geographical Distance Reranker (GDR) to prioritize spatially relevant context; finally, an Agentic LLM Rectifier (ALR) refines the LLM's outputs through training-free self-reflection. The work integrates retrieval-augmented generation (RAG) with geography-informed reranking for zero-shot POI recommendation and adds a self-reflective correction step that requires no additional training. Experiments on three real-world Foursquare datasets show accuracy gains over both conventional models and LLM-based baselines. The source code is publicly available.

📝 Abstract
Next point-of-interest (POI) recommendation predicts a user's next destination from historical movements. Traditional models require intensive training, while LLMs offer flexible and generalizable zero-shot solutions but often generate generic or geographically irrelevant results due to missing trajectory and spatial context. To address these issues, we propose RALLM-POI, a framework that couples LLMs with retrieval-augmented generation and self-rectification. We first propose a Historical Trajectory Retriever (HTR) that retrieves relevant past trajectories to serve as contextual references, which are then reranked by a Geographical Distance Reranker (GDR) for prioritizing spatially relevant trajectories. Lastly, an Agentic LLM Rectifier (ALR) is designed to refine outputs through self-reflection. Without additional training, RALLM-POI achieves substantial accuracy gains across three real-world Foursquare datasets, outperforming both conventional and LLM-based baselines. Code is released at https://github.com/LKRcrocodile/RALLM-POI.
Problem

Research questions and friction points this paper is trying to address.

Predicting a user's next destination from historical movement data
Avoiding generic or geographically irrelevant LLM recommendations
Improving zero-shot POI recommendation without intensive training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieves relevant historical trajectories as context
Reranks trajectories by geographical distance priority
Refines outputs through agentic LLM self-reflection
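The geographical reranking idea can be illustrated with a minimal sketch: order retrieved trajectories by how close they end to the user's current position, using the haversine great-circle distance. This is a hypothetical illustration of the GDR concept, not the paper's implementation; the function names and data layout (`rerank_by_distance`, trajectories as lists of `(lat, lon)` pairs) are assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def rerank_by_distance(current, trajectories):
    """Sort retrieved trajectories so those ending nearest the user's
    current position come first (closer = more spatially relevant)."""
    def end_distance(traj):
        last_lat, last_lon = traj[-1]
        return haversine_km(current[0], current[1], last_lat, last_lon)
    return sorted(trajectories, key=end_distance)
```

In a full pipeline, the top-ranked trajectories after this step would be serialized into the LLM prompt as contextual references.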