🤖 AI Summary
This work addresses the challenge of fine-grained preference transfer for single-domain users in cross-domain recommendation, where scarce overlapping user behaviors hinder effective knowledge transfer. To overcome this, the study proposes a novel framework that integrates large language models (LLMs) with conditional diffusion models. Specifically, LLMs perform language-based reasoning to infer and generate pseudo-interactions in the target domain, while a conditional diffusion architecture guides the generation of target-user representations conditioned on source-domain behavior. A multi-path supervision mechanism distinguishes genuine from pseudo-interaction pathways, thereby mitigating semantic noise. Evaluated on cross-domain sequential recommendation tasks, the proposed method significantly outperforms state-of-the-art baselines and effectively improves recommendation quality for users present in only one domain.
📝 Abstract
Cross-domain Recommendation (CDR) exploits multi-domain correlations to alleviate data sparsity. As a core task within this field, inter-domain recommendation focuses on predicting preferences for users who interact in a source domain but lack behavioral records in a target domain. Existing approaches predominantly rely on overlapping users as anchors for knowledge transfer. However, in real-world scenarios, overlapping users are often scarce, leaving the vast majority of users with only single-domain interactions. For these users, the absence of explicit alignment signals makes fine-grained preference transfer intrinsically difficult. To address this challenge, this paper proposes Language-Guided Conditional Diffusion for CDR (LGCD), a novel framework that integrates Large Language Models (LLMs) and diffusion models for inter-domain sequential recommendation. Specifically, we leverage LLM reasoning to bridge the domain gap by inferring potential target-domain preferences for single-domain users and mapping them to real items, thereby constructing pseudo-overlapping data. We distinguish between real and pseudo-interaction pathways and introduce additional supervision constraints to mitigate the semantic noise introduced by pseudo-interactions. Furthermore, we design a conditional diffusion architecture to precisely guide the generation of target-domain user representations based on source-domain patterns. Extensive experiments demonstrate that LGCD significantly outperforms state-of-the-art methods on inter-domain recommendation tasks.
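To make the conditioning idea concrete, here is a minimal toy sketch of a conditional diffusion step in which a noisy target-domain user embedding is denoised under guidance from a source-domain behavior summary. All names (`source_cond`, the linear noise predictor `W`) are illustrative assumptions, not the paper's actual LGCD architecture; this only shows the generic mechanism of conditioning the reverse process on source-domain information.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                      # embedding dimension (toy choice)
T = 50                     # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)          # standard linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)        # cumulative signal coefficients

def forward_noise(x0, t):
    """Standard DDPM forward process: sample x_t ~ q(x_t | x_0)."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

def denoise_step(xt, t, source_cond, W):
    """One conditioned reverse step: the predicted noise depends on the
    concatenation of the noisy target embedding and the source-domain
    condition vector (a stand-in for a learned conditioning network)."""
    eps_hat = np.concatenate([xt, source_cond]) @ W   # toy noise predictor
    coef = betas[t] / np.sqrt(1.0 - alphas_bar[t])
    return (xt - coef * eps_hat) / np.sqrt(1.0 - betas[t])

x0 = rng.normal(size=D)                  # "true" target-domain user embedding
source_cond = rng.normal(size=D)         # summary of source-domain behavior
W = rng.normal(size=(2 * D, D)) * 0.01   # untrained toy weights

xt, _ = forward_noise(x0, T - 1)
x_prev = denoise_step(xt, T - 1, source_cond, W)
print(x_prev.shape)
```

In a trained model, `W` would be replaced by a network optimized to predict the injected noise, so the reverse trajectory steers the target-domain representation toward preferences consistent with the source-domain condition.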