Efficient Large-Scale Cross-Domain Sequential Recommendation with Dynamic State Representations

📅 2025-08-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the computational inefficiency of autoregressive models in multi-domain recommendation—caused by global attention over heterogeneous item sequences—this paper proposes a decoupled cross-domain sequential modeling framework. The method retains the autoregressive generation paradigm while eliminating costly full-sequence softmax computation. Its core contributions are: (1) Transition-Aware Position Encoding (TAPE), which explicitly captures cross-domain transition patterns to guide attention toward domain-relevant items; and (2) Dynamic Domain-State Representation (DDSR), a lightweight caching mechanism that replaces global attention with efficient, domain-specific state propagation. Experiments on multiple cross-domain retrieval benchmarks demonstrate consistent improvements: Recall@10 increases by 3.2–5.8%, and inference latency decreases by 37–49%. The framework achieves both high accuracy and strong scalability, making it suitable for large-scale, real-time multi-domain recommendation systems.

Technology Category

Application Category

📝 Abstract
Recently, autoregressive recommendation models (ARMs), such as Meta's HSTU model, have emerged as a major breakthrough over traditional Deep Learning Recommendation Models (DLRMs), exhibiting the highly sought-after scaling law behaviour. However, when applied to multi-domain scenarios, the transformer architecture's attention maps become a computational bottleneck, as they attend to all items across every domain. To tackle this challenge, systems must efficiently balance inter and intra-domain knowledge transfer. In this work, we introduce a novel approach for scalable multi-domain recommendation systems by replacing full inter-domain attention with two innovative mechanisms: 1) Transition-Aware Positional Embeddings (TAPE): We propose novel positional embeddings that account for domain-transition specific information. This allows attention to be focused solely on intra-domain items, effectively reducing the unnecessary computational cost associated with attending to irrelevant domains. 2) Dynamic Domain State Representation (DDSR): We introduce a dynamic state representation for each domain, which is stored and accessed during subsequent token predictions. This enables the efficient transfer of relevant domain information without relying on full attention maps. Our method offers a scalable solution to the challenges posed by large-scale, multi-domain recommendation systems and demonstrates significant improvements in retrieval tasks by separately modelling and combining inter- and intra-domain representations.
Problem

Research questions and friction points this paper is trying to address.

Addresses computational bottleneck in cross-domain recommendation attention maps
Balances inter- and intra-domain knowledge transfer efficiently
Enables scalable multi-domain recommendation without full attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transition-Aware Positional Embeddings for domain transitions
Dynamic Domain State Representation for information transfer
Separate modeling of inter- and intra-domain representations
🔎 Similar Papers
No similar papers found.
M
Manuel V. Loureiro
Huawei Ireland Research Centre, Dublin, Ireland
S
Steven Derby
Huawei Ireland Research Centre, Dublin, Ireland
A
Aleksei Medvedev
Huawei Ireland Research Centre, Dublin, Ireland
A
Alejandro Ariza-Casabona
Huawei Ireland Research Centre, Dublin, Ireland
Gonzalo Fiz Pontiveros
Gonzalo Fiz Pontiveros
Huawei
Discrete MathematicsCombinatoricsProbabilityGraph TheoryRamsey Theory
Tri Kurniawan Wijaya
Tri Kurniawan Wijaya
Huawei Research
Recommender SystemsDeep LearningDemand ResponseSmart Grid