🤖 AI Summary
Existing re-ranking methods in recommender systems struggle to jointly optimize multiple objectives—accuracy, diversity, and fairness—while also suffering from limited scalability and insufficient personalization. To address these issues, this paper proposes the first large language model (LLM)-based automatic re-ranking framework. The approach integrates chain-of-thought (CoT) reasoning with fully connected graph-structured modeling, enabling a configurable, unified representation of multi-criteria objectives and dynamic weight adjustment via prompt engineering. Extensive experiments on three public benchmark datasets demonstrate that the method significantly outperforms state-of-the-art approaches across metrics covering accuracy, diversity, and fairness. Notably, it achieves, for the first time, efficient joint optimization of all three objectives while exhibiting strong generalization and adaptability to diverse user preferences and item distributions.
📝 Abstract
Reranking is a critical component in recommender systems, playing an essential role in refining the output of recommendation algorithms. Traditional reranking models have focused predominantly on accuracy, but modern applications demand consideration of additional criteria such as diversity and fairness. Existing reranking approaches often fail to harmonize these diverse criteria effectively at the model level. Moreover, these models frequently encounter challenges with scalability and personalization due to their complexity and the varying significance of different reranking criteria across scenarios. In response, we introduce a comprehensive reranking framework enhanced by an LLM, designed to seamlessly integrate various reranking criteria while maintaining scalability and facilitating personalized recommendations. This framework employs a fully connected graph structure, allowing the LLM to simultaneously consider multiple aspects such as accuracy, diversity, and fairness through a coherent Chain-of-Thought (CoT) process. A customizable input mechanism is also integrated, enabling the tuning of the language model's focus to meet specific reranking needs. We validate our approach on three popular public datasets, where our framework demonstrates superior performance over existing state-of-the-art reranking models in balancing multiple criteria.
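The abstract describes a customizable input mechanism that tunes the LLM's focus across criteria via the prompt. A minimal sketch of how such a mechanism might look is below; the function name, weight keys, and prompt wording are all illustrative assumptions, not details from the paper:

```python
def build_rerank_prompt(candidates, weights):
    """Serialize candidate items and criterion weights into a CoT reranking prompt.

    `weights` maps criterion names (e.g. accuracy, diversity, fairness) to
    relative importances; they are normalized so the emphasis instruction
    is well-defined regardless of the scale the caller uses.
    """
    total = sum(weights.values())
    norm = {name: value / total for name, value in weights.items()}

    lines = [
        "You are a reranking assistant for a recommender system.",
        "Rerank the candidate items below. Reason step by step about each",
        "criterion before giving the final order (Chain-of-Thought).",
        "Criterion emphasis (higher = more important):",
    ]
    # List criteria from most to least emphasized.
    for criterion, weight in sorted(norm.items(), key=lambda kv: -kv[1]):
        lines.append(f"- {criterion}: {weight:.2f}")

    lines.append("Candidates:")
    for idx, item in enumerate(candidates, 1):
        lines.append(f"{idx}. {item['title']} (category: {item['category']})")

    lines.append("Output the final ranking as a list of item numbers.")
    return "\n".join(lines)


# Example: emphasize accuracy twice as much as diversity or fairness.
candidates = [
    {"title": "Item A", "category": "movies"},
    {"title": "Item B", "category": "books"},
]
prompt = build_rerank_prompt(
    candidates, {"accuracy": 2.0, "diversity": 1.0, "fairness": 1.0}
)
```

Adjusting the weight dictionary per request is one plausible way to realize the "dynamic weight adjustment via prompt engineering" the summary refers to, since no model retraining is needed to shift the trade-off.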