Enhancing Recommendation Explanations through User-Centric Refinement

📅 2025-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing natural language explanations in recommender systems suffer from weak factual grounding, insufficient personalization, and incoherent sentiment, failing to meet users’ practical needs. To address this, we propose a user-centric multi-turn refinement paradigm that dynamically optimizes explanations during inference. Our approach introduces an innovative “Plan–Refine” two-stage multi-agent collaborative framework, integrating hierarchical reflection mechanisms: strategic-level (goal alignment) and content-level (semantic accuracy), enabling controllable, LLM-based generation and iterative refinement. Extensive experiments on three public benchmarks demonstrate substantial improvements over current state-of-the-art methods: +18.7% in factual consistency, +22.3% in personalization, and +15.9% in sentiment coherence—collectively establishing new performance benchmarks for explainable recommendation.

Technology Category

Application Category

📝 Abstract
Generating natural language explanations for recommendations has become increasingly important in recommender systems. Traditional approaches typically treat user reviews as ground truth for explanations and focus on improving review prediction accuracy by designing various model architectures. However, due to limitations in data scale and model capability, these explanations often fail to meet key user-centric aspects such as factuality, personalization, and sentiment coherence, significantly reducing their overall helpfulness to users. In this paper, we propose a novel paradigm that refines initial explanations generated by existing explainable recommender models during the inference stage to enhance their quality in multiple aspects. Specifically, we introduce a multi-agent collaborative refinement framework based on large language models. To ensure alignment between the refinement process and user demands, we employ a plan-then-refine pattern to perform targeted modifications. To enable continuous improvements, we design a hierarchical reflection mechanism that provides feedback on the refinement process from both strategic and content perspectives. Extensive experiments on three datasets demonstrate the effectiveness of our framework.
Problem

Research questions and friction points this paper is trying to address.

Improving recommendation explanation quality
Enhancing user-centric aspects in explanations
Refining explanations using multi-agent framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent collaborative refinement
plan-then-refine pattern
hierarchical reflection mechanism
🔎 Similar Papers
No similar papers found.
J
Jingsen Zhang
Renmin University of China
Zihang Tian
Zihang Tian
Doctor at Gaoling School of AI
LLM-Based Agent
X
Xueyang Feng
Renmin University of China
X
Xu Chen
Renmin University of China