🤖 AI Summary
To address insufficient personalization and weak interpretability arising from sparse user feedback and heterogeneous item attributes, this paper proposes a hybrid recommendation framework integrating Graph Attention Networks (GATs) and Large Language Models (LLMs). The framework leverages LLMs to generate semantically rich user and item representations and models high-order collaborative relationships on the user–item bipartite graph. It employs Bayesian Personalized Ranking (BPR) loss, cosine similarity regularization, and robust negative sampling for end-to-end optimization. Additionally, an LLM-driven re-ranking module and a natural language explanation generator are introduced to jointly enhance recommendation accuracy and decision transparency. Extensive experiments on MovieLens 100K and 1M demonstrate significant improvements over state-of-the-art baselines. Ablation studies confirm the critical contributions of LLM-derived representations and semantic constraints to overall performance.
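The training objective above combines a BPR term with a cosine similarity regularizer. A minimal NumPy sketch of one plausible per-triple formulation is given below; the pairing of the GAT user embedding with an LLM profile embedding and the weight `lam` are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def bpr_cosine_loss(u, pos, neg, u_llm, lam=0.1):
    """Per-triple hybrid loss (sketch, assumed formulation).
    u: GAT user embedding; pos/neg: positive and sampled-negative item
    embeddings; u_llm: LLM-derived profile embedding for the same user."""
    # BPR: the positive item should score higher than the sampled negative
    x_pos = u @ pos
    x_neg = u @ neg
    bpr = -np.log(1.0 / (1.0 + np.exp(-(x_pos - x_neg))))
    # Cosine regularizer: keep the GAT embedding semantically close to
    # the LLM profile embedding (assumed role of the cosine term)
    cos = (u @ u_llm) / (np.linalg.norm(u) * np.linalg.norm(u_llm))
    return bpr + lam * (1.0 - cos)
```

In this sketch the "robust negative sampling" mentioned above would govern how `neg` is drawn (e.g., avoiding false negatives among unobserved items); that sampling strategy is left outside the loss itself.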
📝 Abstract
Recommender systems are essential for guiding users through the vast and diverse landscape of digital content by delivering personalized and relevant suggestions. However, improving both personalization and interpretability remains challenging, particularly under limited user feedback or heterogeneous item attributes. In this article, we propose a novel hybrid recommendation framework that combines Graph Attention Networks (GATs) with Large Language Models (LLMs) to address these limitations. LLMs are first used to enrich user and item representations by generating semantically meaningful profiles from metadata such as titles, genres, and overviews. These enriched embeddings serve as initial node features in a user–movie bipartite graph, which is processed by a GAT-based collaborative filtering model. To enhance ranking accuracy, we introduce a hybrid loss function that combines Bayesian Personalized Ranking (BPR), cosine similarity, and robust negative sampling. As post-processing, the GAT-generated recommendations are re-ranked by the LLM, which also generates natural-language justifications to improve transparency. We evaluate our model on benchmark datasets, including MovieLens 100K and 1M, where it consistently outperforms strong baselines. Ablation studies confirm that LLM-based embeddings and the cosine similarity term contribute significantly to the performance gains. This work demonstrates the potential of integrating LLMs to improve both the accuracy and interpretability of recommender systems.
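The core propagation step, GAT attention over the bipartite graph with LLM embeddings as initial node features, can be sketched as a single dense layer; shapes, the stacking of users and items into one node set, and the `neg_slope` value are illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np

def gat_layer(h, adj, W, a, neg_slope=0.2):
    """One graph-attention layer (sketch, assumed single-head formulation).
    h: (N, F) initial node features, e.g. LLM-derived embeddings for users
    and movies stacked as one node set; adj: (N, N) binary adjacency;
    W: (F, Fp) projection; a: (2*Fp,) attention vector."""
    Wh = h @ W                                    # project features: (N, Fp)
    Fp = Wh.shape[1]
    # split a into source/target halves (standard GAT decomposition of
    # a^T [Wh_i || Wh_j] into two dot products)
    src = Wh @ a[:Fp]                             # (N,)
    dst = Wh @ a[Fp:]                             # (N,)
    e = src[:, None] + dst[None, :]               # raw scores e_ij
    e = np.where(e > 0, e, neg_slope * e)         # LeakyReLU
    e = np.where(adj > 0, e, -1e9)                # attend only to neighbors
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)         # softmax per node
    return att @ Wh                               # weighted neighbor aggregation
```

Stacking such layers lets each user embedding absorb multi-hop (high-order) collaborative signal from the movies it rated and their other raters, which is the role the GAT plays in the pipeline described above.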