AI Summary
To address the limited discriminative capability of sequential recommendation models caused by sparse user behavioral data, this paper proposes FENRec, a novel framework that introduces time-aware soft labels, leveraging future interaction information to generate dynamic, fine-grained supervision signals. It further designs a persistence-aware hard negative mining mechanism that adaptively constructs semantically relevant and training-robust negative samples from users' historical sequences. Built on a Transformer-based sequence encoder, FENRec integrates soft-label supervision with contrastive learning. This approach mitigates two key limitations of conventional methods: the coarse granularity of binary labels and the diminishing efficacy of random negative sampling in later training stages. Extensive experiments on four benchmark datasets demonstrate consistent superiority over state-of-the-art methods, with an average improvement of 6.16% across all evaluation metrics.
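The summary does not specify how the persistence-aware hard negatives are constructed; as a purely hypothetical illustration of the general idea of mining negatives from a user's own history (rather than sampling uniformly at random), one might sketch:

```python
import random

def sample_hard_negative(history, target, all_items):
    """Hypothetical sketch, NOT the paper's exact mechanism: prefer a
    negative drawn from the user's own past interactions, which tends to
    be semantically closer to the target (hence a harder negative) than
    a uniformly random item; fall back to random sampling otherwise."""
    candidates = [item for item in history if item != target]
    if candidates:
        return random.choice(candidates)
    # Fallback: uniform random negative, excluding the target item.
    return random.choice([item for item in all_items if item != target])
```

The intuition is that items a user has already interacted with remain plausible (and thus discriminative) negatives throughout training, whereas random negatives become trivially easy once the model has roughly learned the item space.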
Abstract
Sequential recommendation (SR) systems predict user preferences by analyzing time-ordered interaction sequences. A common challenge for SR is data sparsity, as users typically interact with only a limited number of items. While contrastive learning has been employed in previous approaches to address this challenge, these methods often adopt binary labels, missing finer-grained patterns and overlooking the information contained in users' subsequent behaviors. Additionally, they rely on random sampling to select negatives for contrastive learning, which may not yield sufficiently hard negatives during later training stages. In this paper, we propose Future data utilization with Enduring Negatives for contrastive learning in sequential Recommendation (FENRec). Our approach leverages future data through time-dependent soft labels and generates enduring hard negatives from existing data, thereby enhancing effectiveness in tackling data sparsity. Experimental results demonstrate state-of-the-art performance across four benchmark datasets, with an average improvement of 6.16% across all metrics.
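The abstract does not give the exact form of the time-dependent soft labels; a minimal sketch of one plausible instantiation, assuming an exponential decay over future interactions (the decay rate and the whole formulation are assumptions, not the paper's definition), could look like:

```python
import math

def time_dependent_soft_labels(future_items, num_items, decay=0.5):
    """Hypothetical sketch, NOT the paper's exact formulation: weight
    each future item by how soon it occurs (exponential decay over its
    step offset), then normalize the weights into a soft label
    distribution over the item vocabulary."""
    labels = [0.0] * num_items
    for step, item in enumerate(future_items):
        labels[item] += math.exp(-decay * step)  # nearer future -> larger weight
    total = sum(labels)
    return [w / total for w in labels]
```

Compared with a one-hot (binary) target on the single next item, such a distribution supervises the model with graded signals from several upcoming interactions, which is the kind of finer-grained supervision the abstract attributes to future-data utilization.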