🤖 AI Summary
Large language model (LLM)-based recommender systems struggle to infer deep user preferences when historical interaction data are sparse. Method: We propose a two-stage clarifying-question generation framework grounded in diffusion modeling: the forward process injects "noise" into the user profile by progressively removing answers to clarifying questions, while the reverse process trains a model to iteratively "denoise" the profile, yielding a funnel-shaped, semantically progressive question sequence; joint modeling of the user profile and the question-answer sequence further enables dynamic preference refinement. Contribution/Results: This work is the first to introduce the diffusion paradigm into interactive-recommendation question generation, eliminating the reliance on extensive dialogue annotations required by conventional reinforcement learning or supervised fine-tuning approaches. Experiments demonstrate significant improvements in question-guidance efficiency and recommendation accuracy across domains, with particularly strong performance in cold-start scenarios.
📝 Abstract
Large Language Models (LLMs) have made it possible for recommender systems to interact with users through open-ended conversational interfaces. To personalize LLM responses, it is crucial to elicit user preferences, especially when user history is limited. One way to gather more information is to present clarifying questions to the user. However, generating effective sequences of clarifying questions across diverse domains remains a challenge. To address this, we introduce a novel approach for training LLMs to ask sequential questions that reveal user preferences. Our method follows a two-stage process inspired by diffusion models. Starting from a user profile, the forward process generates clarifying questions to obtain answers and then removes those answers step by step, serving as a way to add "noise" to the user profile. The reverse process involves training a model to "denoise" the user profile by learning to ask effective clarifying questions. Our results show that our method significantly improves the LLM's proficiency in asking funnel questions and eliciting user preferences effectively.
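The forward/reverse process described above can be sketched in code. This is an illustrative toy, not the paper's actual pipeline: the function names (`forward_noising`, `denoising_targets`), the dict-based profile representation, and the random removal order are all assumptions made here for clarity. The forward process erases one answer per step ("noise"); the reverse direction pairs each noised profile with the question whose answer the next denoising step would restore, which is the kind of supervision signal a question-asking model could be trained on.

```python
import random

def forward_noising(qa_pairs, seed=0):
    """Forward process (assumed representation): start from a fully answered
    profile and erase one answer per step, producing a trajectory from the
    clean profile (t=0) to a fully "noised" one (t=len(qa_pairs))."""
    rng = random.Random(seed)
    questions = list(qa_pairs)
    rng.shuffle(questions)  # order in which answers are erased
    trajectory = [dict(qa_pairs)]
    current = dict(qa_pairs)
    for q in questions:
        current = dict(current)
        current[q] = None  # erasing an answer == injecting "noise"
        trajectory.append(current)
    return trajectory

def denoising_targets(trajectory):
    """Reverse-process supervision: pair each noised profile with the
    clarifying question whose answer the next (less noisy) step restores,
    walking from the most-noised profile back toward the clean one."""
    targets = []
    for t in range(len(trajectory) - 1, 0, -1):
        noisy, cleaner = trajectory[t], trajectory[t - 1]
        restored = next(
            q for q in cleaner
            if cleaner[q] is not None and noisy[q] is None
        )
        targets.append((noisy, restored))
    return targets

# Hypothetical example profile (questions and answers invented for illustration).
profile = {"Preferred genre?": "sci-fi",
           "Favorite era?": "1990s",
           "Mood tonight?": "dark"}
trajectory = forward_noising(profile)
training_pairs = denoising_targets(trajectory)
```

Here `training_pairs[0]` pairs the fully empty profile with the first question to ask, so replaying the list recovers answers one question at a time, mirroring the funnel-shaped sequence the paper aims for.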