🤖 AI Summary
Existing LLM-based recommendation methods predominantly rely on static prompts, failing to capture the dynamic evolution of user preferences and the complexity of interactive behavior. To address this, we propose MADRec, a Multi-Aspect Driven Recommendation agent. First, it extracts fine-grained aspect-level features from user reviews in an unsupervised manner to construct structured, multi-dimensional user and item profiles. Second, it introduces a Self-Feedback mechanism that dynamically switches reasoning strategies when target items are absent from the output. Third, it employs Re-Ranking to increase input density and jointly generates interpretable recommendations. MADRec integrates end-to-end with LLMs and supports both direct and sequential recommendation. Extensive experiments across multiple domains demonstrate that MADRec significantly outperforms both traditional and LLM-based baselines, achieving state-of-the-art recommendation accuracy and human-evaluated explanation quality, along with strong interpretability and environment-aware adaptability.
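The Self-Feedback mechanism described above can be pictured as a retry loop: when the target item is absent from the ranked output, the inference criteria are relaxed and ranking is re-run. The sketch below is a minimal illustration under stated assumptions; the scoring function, threshold, and relaxation step (`score_fn`, `threshold`, `relax`) are hypothetical stand-ins, since the paper's actual prompt-based reasoning criteria are not reproduced here.

```python
def self_feedback_recommend(candidates, score_fn, target,
                            top_k=3, threshold=0.8,
                            max_rounds=3, relax=0.2):
    """Return a top-k ranking; each round the target item is missing,
    relax the score threshold and retry (Self-Feedback sketch)."""
    ranked = []
    for _ in range(max_rounds):
        # Keep only candidates whose profile-match score clears the bar.
        shortlist = [c for c in candidates if score_fn(c) >= threshold]
        ranked = sorted(shortlist, key=score_fn, reverse=True)[:top_k]
        if target in ranked:
            return ranked
        threshold -= relax  # Self-Feedback: loosen criteria, retry.
    return ranked

# Toy usage: item "b" only enters the shortlist after two relaxations.
scores = {"a": 0.9, "b": 0.5, "c": 0.7}
print(self_feedback_recommend(list(scores), scores.get, "b"))
```

In a full agent, `score_fn` would be replaced by an LLM call that judges candidate items against the multi-aspect user profile, and "relaxing the threshold" would correspond to switching to a looser reasoning strategy in the prompt.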
📝 Abstract
Recent attempts to integrate large language models (LLMs) into recommender systems have gained momentum, but most remain limited to simple text generation or static prompt-based inference, failing to capture the complexity of user preferences and real-world interactions. This study proposes the Multi-Aspect Driven LLM Agent (MADRec), an autonomous LLM-based recommender that constructs user and item profiles by unsupervised extraction of multi-aspect information from reviews, and performs direct recommendation, sequential recommendation, and explanation generation. MADRec generates structured profiles via aspect-category-based summarization and applies Re-Ranking to construct high-density inputs. When the ground-truth item is missing from the output, the Self-Feedback mechanism dynamically adjusts the inference criteria. Experiments across multiple domains show that MADRec outperforms traditional and LLM-based baselines in both precision and explainability, with human evaluation further confirming the persuasiveness of the generated explanations.
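To make the profile-construction step concrete, the sketch below buckets review sentences by aspect category with a toy keyword lexicon. The `ASPECT_LEXICON` categories and the keyword-overlap matching rule are hypothetical assumptions for illustration only; the paper's method performs this extraction unsupervisedly with an LLM rather than a fixed lexicon.

```python
from collections import defaultdict

# Hypothetical aspect lexicon; categories and cue words are
# illustrative, not taken from the paper.
ASPECT_LEXICON = {
    "price":   {"cheap", "expensive", "price", "value"},
    "quality": {"durable", "broke", "quality", "sturdy"},
    "design":  {"color", "style", "design", "look"},
}

def build_profile(reviews):
    """Group review sentences into aspect categories, yielding a
    structured multi-aspect profile (sketch of profile construction)."""
    profile = defaultdict(list)
    for review in reviews:
        for sentence in review.split("."):
            words = set(sentence.lower().split())
            for aspect, cues in ASPECT_LEXICON.items():
                if words & cues:  # sentence mentions this aspect
                    profile[aspect].append(sentence.strip())
    return dict(profile)

# Toy usage: one review touching two aspect categories.
print(build_profile(["Great value and cheap. The design looks sleek."]))
```

The resulting per-aspect buckets play the role of the structured user-item profiles that MADRec feeds to the LLM as high-density input for ranking and explanation generation.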