A Zero-Shot Generalization Framework for LLM-Driven Cross-Domain Sequential Recommendation

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation of large language model (LLM) generalization in zero-shot cross-domain sequential recommendation (ZCDSR) caused by semantic and behavioral distribution discrepancies across domains, this paper proposes a two-level cross-domain alignment framework. At the item level, a universality-diversity balanced loss mitigates semantic bias; at the sequence level, a behavioral pattern transfer mechanism enables dynamic target-domain inference via user sequence clustering and attention-weighted aggregation. The method requires no target-domain interaction data and avoids fine-tuning, instead integrating LLM embeddings, contrastive learning, and a novel cross-domain contrastive loss. Extensive experiments across multiple datasets and domains demonstrate average improvements of 12.7% in Recall@10 and NDCG@10, significantly enhancing robustness and generalization capability in zero-shot settings.
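The item-level idea described above — aligning embeddings of similar items across domains while keeping items within a domain spread apart — can be sketched as a simple two-term loss. This is an illustrative sketch in NumPy, not the paper's exact formulation: the function name `item_level_loss`, the cosine-similarity form of each term, and the trade-off weight `lam` are all assumptions for demonstration.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale rows to unit length so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def item_level_loss(src, tgt, pairs, lam=0.5):
    """Illustrative universality-diversity balanced loss (not the paper's exact form).

    src, tgt : (n, d) item embeddings from the source / target domains.
    pairs    : list of (i, j) index pairs of semantically similar items across domains.
    lam      : trade-off between cross-domain compactness and intra-domain diversity.
    """
    src, tgt = l2_normalize(src), l2_normalize(tgt)
    # Inter-domain compactness: similar items across domains should align.
    align = np.mean([1.0 - src[i] @ tgt[j] for i, j in pairs])
    # Intra-domain diversity: penalize mean off-diagonal cosine similarity
    # within the source domain, so embeddings do not collapse to one point.
    sim = src @ src.T
    n = len(src)
    diversity = (sim.sum() - n) / (n * (n - 1))
    return align + lam * diversity
```

Minimizing `align` pulls matched cross-domain items together; the `diversity` penalty counteracts the degenerate solution where all embeddings become identical (and hence trivially aligned but useless for ranking).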

📝 Abstract
Zero-shot cross-domain sequential recommendation (ZCDSR) enables predictions in unseen domains without the need for additional training or fine-tuning, making it particularly valuable in data-sparse environments where traditional models struggle. Recent advancements in large language models (LLMs) have greatly improved ZCDSR by leveraging rich pretrained representations to facilitate cross-domain knowledge transfer. However, a key challenge persists: domain semantic bias, which arises from variations in vocabulary and content focus across domains. This misalignment leads to inconsistencies in item embeddings and hinders generalization. To address this issue, we propose a novel framework designed to enhance LLM-based ZCDSR by improving cross-domain alignment at both the item and sequential levels. At the item level, we introduce a generalization loss that promotes inter-domain compactness by aligning embeddings of similar items across domains while maintaining intra-domain diversity to preserve unique item characteristics. This prevents embeddings from becoming overly generic while ensuring effective transferability. At the sequential level, we develop a method for transferring user behavioral patterns by clustering user sequences in the source domain and applying attention-based aggregation for target domain inference. This dynamic adaptation of user embeddings allows effective zero-shot recommendations without requiring target-domain interactions. Comprehensive experiments across multiple datasets and domains demonstrate that our framework significantly improves sequential recommendation performance in the ZCDSR setting. By mitigating domain bias and enhancing the transferability of sequential patterns, our method provides a scalable and robust approach for achieving more effective zero-shot recommendations across domains.
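The sequence-level transfer the abstract describes — cluster user sequences in the source domain, then adapt a target-domain user via attention over the cluster centroids — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `transfer_user_embedding`, the dot-product attention scores, and the temperature `tau` are assumptions; in the paper's setting the centroids would come from clustering LLM-derived source-domain sequence embeddings (e.g., with k-means), and `target_seq_emb` from encoding the target user's item sequence with the same LLM.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def transfer_user_embedding(target_seq_emb, centroids, tau=1.0):
    """Attention-weighted aggregation over source-domain behavior clusters.

    target_seq_emb : (d,) embedding of the target-domain user sequence.
    centroids      : (k, d) centroids of clustered source-domain user sequences.
    tau            : temperature; lower values sharpen the attention.
    """
    scores = centroids @ target_seq_emb / tau   # similarity to each behavior cluster
    weights = softmax(scores)                   # attention weights over clusters
    return weights @ centroids                  # adapted user embedding for inference
```

Because the adapted embedding is a convex combination of source-domain patterns, no target-domain interactions or fine-tuning are needed at inference time, matching the zero-shot setting.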
Problem

Research questions and friction points this paper is trying to address.

Zero-shot Cross-domain Sequential Recommendation
Language Model Adaptability
Data Sparsity Challenge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-shot Cross-domain Sequential Recommendation
Large Language Models
Behavior Pattern Transfer
Yunzhe Li
University of Illinois, Urbana-Champaign
Junting Wang
University of Illinois, Urbana-Champaign
Graph Neural Network · Deep Learning · Data Mining · Recommender System
Hari Sundaram
University of Illinois, Urbana-Champaign
Zhining Liu
University of Illinois, Urbana-Champaign