Towards Explainable Temporal User Profiling with LLMs

📅 2025-05-01
🤖 AI Summary
This work addresses the limited temporal discriminability and interpretability of user preference modeling in recommender systems. We propose an interpretable temporal user profiling framework that integrates large language model (LLM)-driven natural-language profile generation with an explicit disentanglement of short- and long-term interests. Specifically, interaction sequences are compressed into semantically rich textual summaries via LLMs, and the short- and long-term representations are fused through a dual-channel attention mechanism that jointly optimizes recommendation accuracy and explanation quality. Extensive experiments on multiple real-world datasets show significant improvements over state-of-the-art baselines. The generated explanations are readable, semantically accurate, and closely aligned with user behavior, yielding both high recommendation precision and improved user comprehensibility, and thereby enhancing system transparency and trustworthiness.

📝 Abstract
Accurately modeling user preferences is vital not only for improving recommendation performance but also for enhancing transparency in recommender systems. Conventional user profiling methods, such as averaging item embeddings, often overlook the evolving, nuanced nature of user interests, particularly the interplay between short-term and long-term preferences. In this work, we leverage large language models (LLMs) to generate natural language summaries of users' interaction histories, distinguishing recent behaviors from more persistent tendencies. Our framework not only models temporal user preferences but also produces natural language profiles that can be used to explain recommendations in an interpretable manner. These textual profiles are encoded via a pre-trained model, and an attention mechanism dynamically fuses the short-term and long-term embeddings into a comprehensive user representation. Beyond boosting recommendation accuracy over multiple baselines, our approach naturally supports explainability: the interpretable text summaries and attention weights can be exposed to end users, offering insights into why specific items are suggested. Experiments on real-world datasets underscore both the performance gains and the promise of generating clearer, more transparent justifications for content-based recommendations.
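The abstract describes fusing short-term and long-term profile embeddings with an attention mechanism. A minimal sketch of such a two-channel fusion is shown below; the paper does not specify the exact scoring function, so `fuse_profiles`, the learned query vector `w_query`, and the toy embeddings are all hypothetical stand-ins for the encoded LLM-generated profile texts.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_profiles(short_emb, long_emb, w_query):
    """Attention-weighted fusion of short- and long-term embeddings (sketch)."""
    channels = np.stack([short_emb, long_emb])  # shape (2, d)
    scores = channels @ w_query                 # one score per channel
    weights = softmax(scores)                   # attention over {short, long}
    user_repr = weights @ channels              # convex combination, shape (d,)
    return user_repr, weights

# Toy embeddings standing in for encoded textual profiles (hypothetical values)
short_emb = np.array([0.9, 0.1, 0.0, 0.2])
long_emb = np.array([0.2, 0.8, 0.5, 0.1])
w_query = np.array([1.0, 0.5, -0.3, 0.0])  # hypothetical learned query vector
user_repr, weights = fuse_profiles(short_emb, long_emb, w_query)
```

Because the weights are a softmax, they sum to one and can be exposed alongside the textual profiles to indicate how much the recommendation leaned on recent versus persistent interests, which is the explainability mechanism the abstract points to.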
Problem

Research questions and friction points this paper is trying to address.

Modeling evolving user preferences for better recommendations
Distinguishing short-term and long-term user interests
Enhancing transparency via explainable natural language profiles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverage LLMs for natural language user summaries
Dynamic fusion of short-term and long-term preferences
Pre-trained model encodes interpretable textual profiles
Milad Sabouri
DePaul University
Machine Learning, Reinforcement Learning, Recommender Systems

M. Mansoury
Delft University of Technology, Delft, Netherlands

Kun Lin
DePaul University
Recommender Systems

B. Mobasher
DePaul University, Chicago, USA