🤖 AI Summary
This work addresses the lack of temporal discriminability and interpretability in user preference modeling for recommender systems. We propose an interpretable temporal user profiling framework that, for the first time, integrates large language model (LLM)-driven natural-language user profile generation with explicit disentanglement of short- and long-term interests. Specifically, interaction sequences are compressed into semantically rich textual summaries via LLMs, and short- and long-term representations are fused using a dual-channel attention mechanism to jointly optimize recommendation accuracy and explanation quality. Extensive experiments on multiple real-world datasets demonstrate significant improvements over state-of-the-art baselines. The generated explanations are naturally readable, semantically accurate, and strongly aligned with user behavior—achieving both high recommendation precision and enhanced user comprehensibility. This framework substantially improves system transparency and trustworthiness.
📝 Abstract
Accurately modeling user preferences is vital not only for improving recommendation performance but also for enhancing transparency in recommender systems. Conventional user profiling methods, such as averaging item embeddings, often overlook the evolving, nuanced nature of user interests, particularly the interplay between short-term and long-term preferences. In this work, we leverage large language models (LLMs) to generate natural language summaries of users' interaction histories, distinguishing recent behaviors from more persistent tendencies. Our framework not only models temporal user preferences but also produces natural language profiles that can be used to explain recommendations in an interpretable manner. These textual profiles are encoded via a pre-trained model, and an attention mechanism dynamically fuses the short-term and long-term embeddings into a comprehensive user representation. Beyond boosting recommendation accuracy over multiple baselines, our approach naturally supports explainability: the interpretable text summaries and attention weights can be exposed to end users, offering insights into why specific items are suggested. Experiments on real-world datasets underscore both the performance gains and the promise of generating clearer, more transparent justifications for content-based recommendations.