EAGER-LLM: Enhancing Large Language Models as Recommenders through Exogenous Behavior-Semantic Integration

📅 2025-02-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from inefficient collaborative learning, weak result relevance, and difficulty in fusing multi-source features in recommendation systems due to misalignment between pre-trained linguistic semantics and collaborative semantics. Method: We propose a non-intrusive behavior–semantics fusion framework that introduces (i) a dual-source knowledge-enriched item index, (ii) a multi-scale alignment reconstruction task, and (iii) an annealing adapter—enabling dual-stream (behavioral and semantic) encoding and joint contrastive-reconstructive pre-training on decoder-only LLMs without modifying the backbone. Contribution/Results: The method achieves parameter efficiency and end-to-end collaborative optimization. On three public recommendation benchmarks, it improves Recall@10 by 12.7% and NDCG@10 by 9.3% over state-of-the-art LLM-based recommenders, demonstrating the effectiveness and generalizability of semantic–collaborative co-modeling.

📝 Abstract
Large language models (LLMs) are increasingly leveraged as foundational backbones in the development of advanced recommender systems, offering enhanced capabilities through their extensive knowledge and reasoning. Existing LLM-based recommender systems (RSs) often face challenges due to the significant gap between the linguistic semantics of pre-trained LLMs and the collaborative semantics essential for RSs. These systems reuse pre-trained linguistic semantics but must learn collaborative semantics from scratch through the LLM backbone. However, LLMs are not designed for recommendation, leading to inefficient collaborative learning, weak result correlations, and poor integration of traditional RS features. To address these challenges, we propose EAGER-LLM, a decoder-only LLM-based generative recommendation framework that integrates endogenous and exogenous behavioral and semantic information in a non-intrusive manner. Specifically, we propose 1) dual-source knowledge-rich item indices that integrate indexing sequences for exogenous signals, enabling efficient link-wide processing; 2) non-invasive multiscale alignment reconstruction tasks that guide the model toward a deeper understanding of both collaborative and semantic signals; and 3) an annealing adapter designed to finely balance the model's recommendation performance with its comprehension capabilities. We demonstrate EAGER-LLM's effectiveness through rigorous testing on three public benchmarks.
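To make the first component concrete, here is a minimal sketch of what a dual-source item index could look like: each item is mapped to one token sequence that concatenates semantic codes (e.g., quantized text-embedding codes) with behavioral codes (e.g., quantized collaborative-filtering codes), kept in disjoint vocabulary ranges so the decoder-only LLM can process both signals in a single index. All function and variable names here are illustrative assumptions, not the paper's actual implementation.

```python
def build_item_index(semantic_codes, behavior_codes,
                     sem_vocab_offset=0, beh_vocab_offset=1024):
    """Fuse per-item code sequences from two sources into one token index.

    The two code spaces are shifted into disjoint vocabulary ranges so that
    a semantic code and a behavioral code can never collide as token IDs.
    """
    index = {}
    for item_id in semantic_codes:
        sem = [sem_vocab_offset + c for c in semantic_codes[item_id]]
        beh = [beh_vocab_offset + c for c in behavior_codes[item_id]]
        index[item_id] = sem + beh
    return index

# Hypothetical codes: semantic from a text encoder + quantizer,
# behavioral from a collaborative-filtering model + quantizer.
semantic_codes = {"item_42": [3, 17, 250]}
behavior_codes = {"item_42": [8, 91]}
print(build_item_index(semantic_codes, behavior_codes)["item_42"])
# -> [3, 17, 250, 1032, 1115]
```

The disjoint-offset trick is one common way to keep multiple discrete code spaces separable inside a single LLM vocabulary.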
Problem

Research questions and friction points this paper is trying to address.

Enhance LLMs for recommender systems
Integrate behavioral and semantic information
Improve collaborative and semantic understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-source knowledge-rich item indices
Non-invasive multiscale alignment tasks
Annealing adapter for performance balance
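The annealing-adapter idea can be sketched as a blending weight that rises over training: early on the frozen backbone dominates (preserving the LLM's comprehension), and later the adapter's recommendation-specific representation takes over. The cosine schedule and all names below are assumptions for illustration, not the paper's exact formulation.

```python
import math

def annealing_gate(step, total_steps, alpha_max=1.0):
    """Cosine-annealed blending weight in [0, alpha_max].

    Starts at 0 (backbone output only) and rises smoothly to alpha_max
    (adapter output dominates) as training progresses.
    """
    progress = min(step, total_steps) / total_steps
    return alpha_max * 0.5 * (1 - math.cos(math.pi * progress))

def adapted_hidden(h_backbone, h_adapter, step, total_steps):
    """Blend frozen-backbone and adapter hidden states with the annealed gate."""
    a = annealing_gate(step, total_steps)
    return [(1 - a) * hb + a * ha for hb, ha in zip(h_backbone, h_adapter)]

# At step 0 the output equals the backbone's; at the final step, the adapter's.
print(adapted_hidden([1.0, 2.0], [5.0, 6.0], step=0, total_steps=100))
# -> [1.0, 2.0]
```

A gradual schedule like this is one way to trade off task-specific performance against retention of pre-trained capabilities without modifying the backbone weights.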
Minjie Hong
Zhejiang University
Multi-modal Learning, LLM, Reinforcement learning, Generative Retrieval, Recommendation
Yan Xia
Zhejiang University, Hangzhou, Zhejiang, China
Zehan Wang
Zhejiang University, Hangzhou, Zhejiang, China
Jieming Zhu
Huawei Noah’s Ark Lab, Shenzhen, Guangdong, China
Ye Wang
Zhejiang University, Hangzhou, Zhejiang, China
Sihang Cai
Zhejiang University, Hangzhou, Zhejiang, China
Xiaoda Yang
Zhejiang University, Hangzhou, Zhejiang, China
Quanyu Dai
Huawei Noah’s Ark Lab, Shenzhen, Guangdong, China
Zhenhua Dong
Noah's Ark Lab, Huawei Technologies Co., Ltd.
Recommender system, causal inference, counterfactual learning, trustworthy AI, machine learning
Zhimeng Zhang
Zhejiang University, Hangzhou, Zhejiang, China
Zhou Zhao
Zhejiang University
Machine Learning, Data Mining, Multimedia Computing