๐ค AI Summary
To address the challenges of cold-start item recommendation and weak modeling of co-purchase patterns in recommender systems, this paper proposes ItemRAGโthe first LLM-based recommendation framework leveraging item-level Retrieval-Augmented Generation (RAG). Methodologically, it replaces conventional user-level retrieval with a novel item-level semantic-co-purchase joint retrieval mechanism, which jointly encodes semantic similarity and frequency-weighted co-purchase signals to precisely capture item-level collaborative relationships. Furthermore, it constructs co-purchase sequences as retrieval contexts to enhance LLMsโ zero-shot recommendation capability. Empirically, ItemRAG achieves up to a 43% improvement in Hit-Ratio@1 under zero-shot settings across multiple benchmark datasets, significantly outperforming user-centric baselines. Crucially, it maintains consistent superiority in both standard and cold-start scenarios, demonstrating robust generalization without task-specific fine-tuning.
๐ Abstract
Recently, large language models (LLMs) have been widely used as recommender systems, owing to their strong reasoning capability and their effectiveness in handling cold-start items. To better adapt LLMs for recommendation, retrieval-augmented generation (RAG) has been incorporated. Most existing RAG methods are user-based, retrieving purchase patterns of users similar to the target user and providing them to the LLM. In this work, we propose ItemRAG, an item-based RAG method for LLM-based recommendation that retrieves relevant items (rather than users) from item-item co-purchase histories. ItemRAG helps LLMs capture co-purchase patterns among items, which are beneficial for recommendations. Especially, our retrieval strategy incorporates semantically similar items to better handle cold-start items and uses co-purchase frequencies to improve the relevance of the retrieved items. Through extensive experiments, we demonstrate that ItemRAG consistently (1) improves the zero-shot LLM-based recommender by up to 43% in Hit-Ratio-1 and (2) outperforms user-based RAG baselines under both standard and cold-start item recommendation settings.