Relative Contrastive Learning for Sequential Recommendation with Similarity-based Positive Sample Selection

📅 2024-10-21
🏛️ International Conference on Information and Knowledge Management
📈 Citations: 0
Influential: 0
🤖 AI Summary
Contrastive learning in sequential recommendation faces two key challenges: data augmentation often distorts user intent, while supervised contrastive learning suffers from a scarcity of same-target sequences, leaving too few positive samples. To address these issues, we propose Relative Contrastive Learning (RCL), a novel framework featuring a dual-tiered positive sample selection strategy that uses same-target sequences as strong positives and semantically similar sequences as weak positives. RCL introduces a weighted relative contrastive loss that represents each sequence closer to its strong positives than to its weak positives, thereby alleviating both signal sparsity and semantic distortion. Compatible with mainstream architectures such as SASRec and BERT4Rec, RCL achieves an average performance gain of 4.88% over state-of-the-art methods on five public benchmark datasets and one private dataset.

📝 Abstract
Contrastive Learning (CL) enhances the training of sequential recommendation (SR) models through informative self-supervision signals. Existing methods often rely on data augmentation strategies to create positive samples and promote representation invariance. However, some strategies, such as item reordering and item substitution, may inadvertently alter user intent. Supervised Contrastive Learning (SCL) based methods offer an alternative to augmentation-based CL methods by selecting same-target sequences (interaction sequences with the same target item) to form positive samples. However, SCL-based methods suffer from the scarcity of same-target sequences and consequently lack enough signals for contrastive learning. In this work, we propose to use similar sequences (with different target items) as additional positive samples and introduce a Relative Contrastive Learning (RCL) framework for sequential recommendation. RCL comprises a dual-tiered positive sample selection module and a relative contrastive learning module. The former selects same-target sequences as strong positive samples and similar sequences as weak positive samples. The latter employs a weighted relative contrastive loss, ensuring that each sequence is represented closer to its strong positive samples than to its weak positive samples. We apply RCL to two mainstream deep learning-based SR models, and our empirical results show that RCL achieves an average improvement of 4.88% over state-of-the-art SR methods on five public datasets and one private dataset.
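The dual-tiered selection described in the abstract can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names, the cosine-similarity measure, and the `sim_threshold` criterion for weak positives are our assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_positives(seq_embs, targets, anchor_idx, sim_threshold=0.9):
    """Dual-tiered positive selection (illustrative sketch).

    Strong positives: other sequences sharing the anchor's target item.
    Weak positives:   sequences with a different target whose embedding
                      similarity to the anchor exceeds a threshold
                      (the threshold criterion is an assumption here).
    """
    strong, weak = [], []
    for j, emb in enumerate(seq_embs):
        if j == anchor_idx:
            continue
        if targets[j] == targets[anchor_idx]:
            strong.append(j)  # same-target sequence -> strong positive
        elif cosine(emb, seq_embs[anchor_idx]) >= sim_threshold:
            weak.append(j)    # similar sequence -> weak positive
    return strong, weak
```

Splitting the candidates this way gives every anchor sequence positive signal even when no other sequence shares its target item, which is the sparsity problem the paper targets.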
Problem

Research questions and friction points this paper is trying to address.

Enhance sequential recommendation via contrastive learning with similarity-based positive pairs
Address scarcity of same-target sequences in supervised contrastive learning methods
Improve recommendation accuracy by distinguishing strong and weak positive samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses similar sequences as additional positive samples
Introduces dual-tiered positive sample selection module
Employs weighted relative contrastive loss
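The weighted relative contrastive loss named above can be sketched as an InfoNCE-style objective in which weak positives receive a reduced weight. The weighting scheme, the temperature `tau`, and the weight `w` are assumptions for illustration, not the paper's exact formulation.

```python
import math

def relative_contrastive_loss(sims, strong, weak, negatives, tau=0.1, w=0.5):
    """Weighted relative contrastive loss (illustrative sketch).

    sims: anchor-to-candidate similarity for every candidate index.
    Strong positives get full weight; weak positives get a reduced
    weight `w` (0 < w < 1), so the anchor is pulled more strongly
    toward same-target sequences than toward merely similar ones.
    """
    exp = [math.exp(s / tau) for s in sims]
    denom = sum(exp[k] for k in strong + weak + negatives)
    loss = 0.0
    for p in strong:                       # full-weight InfoNCE terms
        loss -= math.log(exp[p] / denom)
    for p in weak:                         # down-weighted terms
        loss -= w * math.log(exp[p] / denom)
    return loss / (len(strong) + len(weak))
```

Because each positive's term is a negative log-probability scaled by its weight, minimizing the loss places the anchor closer to strong positives than to weak ones, matching the relative ordering the framework enforces.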