LLM4Rerank: LLM-based Auto-Reranking Framework for Recommendations

📅 2024-06-18
📈 Citations: 6
Influential: 0
🤖 AI Summary
Existing re-ranking methods in recommender systems struggle to jointly optimize multiple objectives—accuracy, diversity, and fairness—while also suffering from limited scalability and insufficient personalization. To address these issues, this paper proposes the first large language model (LLM)-based automatic re-ranking framework. The approach integrates chain-of-thought (CoT) reasoning with a fully connected graph structure, enabling a configurable, unified representation of multi-criteria objectives and dynamic adjustment of their weights via prompt engineering. Extensive experiments on three public benchmark datasets demonstrate that the method significantly outperforms state-of-the-art approaches on metrics spanning accuracy, diversity, and fairness. Notably, it achieves efficient joint optimization of all three objectives while exhibiting strong generalization and adaptability to diverse user preferences and item distributions.

📝 Abstract
Reranking is a critical component in recommender systems, playing an essential role in refining the output of recommendation algorithms. Traditional reranking models have focused predominantly on accuracy, but modern applications demand consideration of additional criteria such as diversity and fairness. Existing reranking approaches often fail to harmonize these diverse criteria effectively at the model level. Moreover, these models frequently encounter challenges with scalability and personalization due to their complexity and the varying significance of different reranking criteria in diverse scenarios. In response, we introduce a comprehensive reranking framework enhanced by LLM, designed to seamlessly integrate various reranking criteria while maintaining scalability and facilitating personalized recommendations. This framework employs a fully connected graph structure, allowing the LLM to simultaneously consider multiple aspects such as accuracy, diversity, and fairness through a coherent Chain-of-Thought (CoT) process. A customizable input mechanism is also integrated, enabling the tuning of the language model's focus to meet specific reranking needs. We validate our approach using three popular public datasets, where our framework demonstrates superior performance over existing state-of-the-art reranking models in balancing multiple criteria.
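The abstract describes a fully connected graph of reranking aspects (accuracy, diversity, fairness) that the LLM traverses via a Chain-of-Thought process until it decides to stop. A minimal sketch of that control flow is below; this is a hypothetical illustration, not the authors' code—the node functions and the fixed `plan` stand in for the LLM's CoT step, which in LLM4Rerank would choose the next node dynamically.

```python
# Hypothetical sketch of an LLM4Rerank-style node-graph reranking loop.
# Each aspect is a node in a fully connected graph; a controller
# (here a fixed plan standing in for the LLM's CoT hops) picks the
# next node until it reaches "stop".

def accuracy_node(items):
    # Order by predicted relevance, highest first.
    return sorted(items, key=lambda it: -it["score"])

def diversity_node(items):
    # Greedy reorder: prefer an item whose category differs from the
    # previously placed item; fall back to the first remaining item.
    remaining, result = list(items), []
    while remaining:
        prev_cat = result[-1]["category"] if result else None
        pick = next((it for it in remaining if it["category"] != prev_cat),
                    remaining[0])
        remaining.remove(pick)
        result.append(pick)
    return result

NODES = {"accuracy": accuracy_node, "diversity": diversity_node}

def rerank(items, plan):
    """Walk the aspect graph along `plan` (stand-in for LLM-chosen hops)."""
    for node in plan:
        if node == "stop":
            break
        items = NODES[node](items)
    return items

candidates = [
    {"id": 1, "score": 0.9, "category": "movie"},
    {"id": 2, "score": 0.8, "category": "movie"},
    {"id": 3, "score": 0.7, "category": "book"},
]
ranked = rerank(candidates, plan=["accuracy", "diversity", "stop"])
print([it["id"] for it in ranked])  # → [1, 3, 2]
```

Because every node returns a full reordering of the same candidate list, nodes compose in any order, which is what lets a prompt-level "focus" change the traversal without retraining anything.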
Problem

Research questions and friction points this paper is trying to address.

Recommendation Systems
Personalization
Scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM4Rerank
Large Language Model
Recommendation System Optimization
Jingtong Gao
PhD, City University of Hong Kong
recommender system, deep learning
Bo Chen
Huawei Noah’s Ark Lab
Weiwen Liu
Associate Professor, Shanghai Jiao Tong University
large language models, AI agents, recommender systems
Xiangyang Li
Huawei Noah’s Ark Lab
Yichao Wang
Huawei Noah’s Ark Lab
Wanyu Wang
City University of Hong Kong
Huifeng Guo
Huawei, Harbin Institute of Technology
Recommender System, Deep Learning, Data Mining
Ruiming Tang
Huawei Noah’s Ark Lab
Xiangyu Zhao
City University of Hong Kong