🤖 AI Summary
Existing studies rely on zero-shot generation of natural language user profiles with large language models (LLMs), but the resulting profiles are of low quality, hindering both recommendation performance and interpretability. To address this, the authors propose LangPTune, the first end-to-end training framework for optimizing LLM-generated user profiles, which explicitly trains the LLM for the recommendation objective. Experiments across diverse training configurations and benchmarks show that LangPTune significantly outperforms zero-shot baselines and matches the recommendation accuracy of state-of-the-art embedding-based methods, while the generated profiles remain readable and interpretable, as validated through GPT-4 simulations and crowdworker user studies. LangPTune thus achieves a principled balance between predictive performance and model transparency.
📝 Abstract
There is a growing interest in natural language-based user profiles for recommender systems, which aim to enhance transparency and scrutability compared with embedding-based methods. Existing studies primarily generate these profiles using zero-shot inference from large language models (LLMs), but their quality remains insufficient, leading to suboptimal recommendation performance. In this paper, we introduce LangPTune, the first end-to-end training framework to optimize LLM-generated user profiles. Our method significantly outperforms zero-shot approaches by explicitly training the LLM for the recommendation objective. Through extensive evaluations across diverse training configurations and benchmarks, we demonstrate that LangPTune not only surpasses zero-shot baselines but also matches the performance of state-of-the-art embedding-based methods. Finally, we investigate whether the training procedure preserves the interpretability of these profiles compared to zero-shot inference through both GPT-4 simulations and crowdworker user studies. Implementation of LangPTune can be found at https://github.com/ZhaolinGao/LangPTune.