End-to-End Personalization: Unifying Recommender Systems with Large Language Models

πŸ“… 2025-08-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address insufficient personalization and weak interpretability arising from sparse user feedback and heterogeneous item attributes, this paper proposes a hybrid recommendation framework integrating Graph Attention Networks (GATs) and Large Language Models (LLMs). The framework leverages LLMs to generate semantically rich user and item representations and models high-order collaborative relationships on the user–item bipartite graph. It employs Bayesian Personalized Ranking (BPR) loss, cosine similarity regularization, and robust negative sampling for end-to-end optimization. Additionally, an LLM-driven re-ranking module and a natural language explanation generator are introduced to jointly enhance recommendation accuracy and decision transparency. Extensive experiments on MovieLens 100K and 1M demonstrate significant improvements over state-of-the-art baselines. Ablation studies confirm the critical contributions of LLM-derived representations and semantic constraints to overall performance.
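The hybrid objective described above (BPR loss plus a cosine-similarity regularizer over sampled triples) can be sketched as follows. This is an illustrative numpy implementation under assumed conventions, not the authors' code; the weighting `lam` and the exact form of the regularizer are placeholders.

```python
import numpy as np

def bpr_cosine_loss(u, pos, neg, lam=0.1):
    """Hybrid ranking loss: BPR over (user, positive, negative) triples,
    plus a cosine-similarity term pulling user embeddings toward positive
    item embeddings. u, pos, neg: (batch, d) arrays; lam is illustrative."""
    pos_score = np.sum(u * pos, axis=1)            # <u, i+>
    neg_score = np.sum(u * neg, axis=1)            # <u, i->
    # BPR: -log sigmoid(score difference), averaged over the batch
    bpr = -np.mean(np.log(1.0 / (1.0 + np.exp(-(pos_score - neg_score)))))
    # Cosine regularizer: encourage alignment of user and positive item
    cos = pos_score / (np.linalg.norm(u, axis=1)
                       * np.linalg.norm(pos, axis=1) + 1e-8)
    return bpr + lam * (1.0 - np.mean(cos))
```

A correctly ranked triple (positive scored above negative) should yield a lower loss than the same triple with the items swapped.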


πŸ“ Abstract
Recommender systems are essential for guiding users through the vast and diverse landscape of digital content by delivering personalized and relevant suggestions. However, improving both personalization and interpretability remains a challenge, particularly in scenarios involving limited user feedback or heterogeneous item attributes. In this article, we propose a novel hybrid recommendation framework that combines Graph Attention Networks (GATs) with Large Language Models (LLMs) to address these limitations. LLMs are first used to enrich user and item representations by generating semantically meaningful profiles based on metadata such as titles, genres, and overviews. These enriched embeddings serve as initial node features in a user–movie bipartite graph, which is processed using a GAT-based collaborative filtering model. To enhance ranking accuracy, we introduce a hybrid loss function that combines Bayesian Personalized Ranking (BPR), cosine similarity, and robust negative sampling. Post-processing involves re-ranking the GAT-generated recommendations using the LLM, which also generates natural-language justifications to improve transparency. We evaluated our model on benchmark datasets, including MovieLens 100K and 1M, where it consistently outperforms strong baselines. Ablation studies confirm that LLM-based embeddings and the cosine similarity term significantly contribute to performance gains. This work demonstrates the potential of integrating LLMs to improve both the accuracy and interpretability of recommender systems.
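The core propagation step in the abstract, attention-weighted aggregation over the user–movie bipartite graph starting from LLM-derived node features, can be sketched as a single-head GAT layer. This is a minimal numpy sketch of standard GAT attention under assumed shapes (random placeholder features stand in for the LLM embeddings), not the authors' implementation.

```python
import numpy as np

def gat_layer(h, adj, W, a):
    """One single-head GAT aggregation step over a (bipartite) adjacency.
    h: (n, d_in) node features -- in the paper's setup these would be
    LLM-derived user/item embeddings; here they are placeholders.
    adj: (n, n) binary adjacency (user-item edges plus self-loops).
    W: (d_in, d_out) projection; a: (2*d_out,) attention vector."""
    z = h @ W                                      # project features
    n = z.shape[0]
    # Pairwise attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
    e = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([z[i], z[j]])
            e[i, j] = s if s > 0 else 0.2 * s      # LeakyReLU, slope 0.2
    e = np.where(adj > 0, e, -1e9)                 # mask non-neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # softmax over neighbors
    return np.tanh(alpha @ z)                      # weighted aggregation
```

An isolated node with only a self-loop simply passes its own projected feature through the nonlinearity, which makes the masking behavior easy to check.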
Problem

Research questions and friction points this paper is trying to address.

Enhancing personalization and interpretability in recommender systems
Addressing limited user feedback and heterogeneous item attributes
Integrating LLMs and GATs for improved recommendation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines GATs with LLMs for recommendations
Uses LLMs to enrich user-item representations
Hybrid loss function enhances ranking accuracy
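The negative-sampling component of the hybrid loss can be illustrated with plain rejection sampling: draw candidate items and discard any the user has already interacted with. This is a hypothetical stdlib sketch; the paper's "robust" sampler may differ (e.g. popularity- or hardness-aware weighting).

```python
import random

def sample_negatives(user_pos, n_items, k=4, seed=0):
    """Draw k negative item ids per user, rejecting observed positives.
    user_pos: dict mapping user id -> set of interacted item ids.
    n_items: catalog size. Assumes k << n_items so rejection terminates fast."""
    rng = random.Random(seed)
    negatives = {}
    for user, positives in user_pos.items():
        drawn = set()
        while len(drawn) < k:
            item = rng.randrange(n_items)
            if item not in positives:              # reject known positives
                drawn.add(item)
        negatives[user] = sorted(drawn)
    return negatives
```

Each sampled set is disjoint from the user's positives, so every (user, positive, negative) triple fed to the BPR term is valid by construction.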
Danial Ebrat
University of Windsor
Tina Aminian
University of Windsor
Sepideh Ahmadian
University of Windsor
Luis Rueda
Professor, School of Computer Science, University of Windsor
machine learning, transcriptomics, cancer biomarkers, single-cell RNA-seq, cybersecurity