LoRec: Large Language Model for Robust Sequential Recommendation against Poisoning Attacks

Date: 2024-01-31
Venue: arXiv.org
Citations: 5
Influential citations: 1

AI Summary
To address the vulnerability of sequential recommender systems to unknown poisoning attacks in open-world settings, this paper proposes LoRec, a robust LLM-based recommendation framework. Methodologically, LoRec introduces LLM4Dec, the first fraud-detection paradigm built on large language models, and an LLM-enhanced CalibraTor (LCT) that applies user-wise reweighting during training, generalizing limited prior knowledge into dynamic identification and suppression of open-world fraudulent behavior. Unlike conventional defenses, LoRec makes no assumptions about specific attack types; instead, it combines LLM-driven behavioral calibration with adversarially robust training to improve the generalizability of the defense. Extensive experiments on multiple benchmark datasets show that LoRec improves defense performance against unseen poisoning attacks by an average of 32.7% while preserving the original recommendation accuracy.

๐Ÿ“ Abstract
Sequential recommender systems stand out for their ability to capture users' dynamic interests and the patterns of item-to-item transitions. However, the inherent openness of sequential recommender systems renders them vulnerable to poisoning attacks, where fraudulent users are injected into the training data to manipulate learned patterns. Traditional defense strategies predominantly depend on predefined assumptions or rules extracted from specific known attacks, limiting their generalizability to unknown attack types. To solve the above problems, considering the rich open-world knowledge encapsulated in Large Language Models (LLMs), our research initially focuses on the capabilities of LLMs in the detection of unknown fraudulent activities within recommender systems, a strategy we denote as LLM4Dec. Empirical evaluations demonstrate the substantial capability of LLMs in identifying unknown fraudsters, leveraging their expansive, open-world knowledge. Building upon this, we propose the integration of LLMs into defense strategies to extend their effectiveness beyond the confines of known attacks. We propose LoRec, an advanced framework that employs LLM-Enhanced Calibration to strengthen the robustness of sequential recommender systems against poisoning attacks. LoRec integrates an LLM-enhanced CalibraTor (LCT) that refines the training process of sequential recommender systems with knowledge derived from LLMs, applying a user-wise reweighting to diminish the impact of fraudsters injected by attacks. By incorporating LLMs' open-world knowledge, the LCT effectively converts the limited, specific priors or rules into a more general pattern of fraudsters, offering improved defenses against poisoning attacks. Our comprehensive experiments validate that LoRec, as a general framework, significantly strengthens the robustness of sequential recommender systems.
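The abstract's user-wise reweighting can be illustrated with a minimal sketch. All names below are hypothetical, and the mapping from a suspicion score to a training weight is an assumption for illustration; in the paper, the LCT learns this calibration from LLM-derived knowledge rather than using a fixed formula.

```python
import math

# Hedged sketch of user-wise reweighting: users that an LLM-based detector
# (e.g., LLM4Dec) flags as suspicious contribute less to the recommender's
# training loss. The exponential weighting rule here is illustrative only.

def user_weights(suspicion_scores, temperature=1.0):
    """Map per-user suspicion scores in [0, 1] to training weights.

    A more suspicious user gets a smaller weight, so likely injected
    fraudsters have less influence on the learned patterns.
    """
    return {u: math.exp(-s / temperature) for u, s in suspicion_scores.items()}

def weighted_loss(per_user_losses, weights):
    """Weighted average of per-user losses: the reweighted objective."""
    total_w = sum(weights[u] for u in per_user_losses)
    return sum(weights[u] * loss for u, loss in per_user_losses.items()) / total_w

# Example: "mallory" looks like an injected profile and is downweighted,
# so her large loss barely moves the overall objective.
scores = {"alice": 0.05, "bob": 0.10, "mallory": 0.95}
losses = {"alice": 0.7, "bob": 0.6, "mallory": 2.5}
print(weighted_loss(losses, user_weights(scores)))
```

In this toy setup, the reweighted loss sits well below the plain average of the three per-user losses, which is exactly the intended effect: the suspected fraudster's contribution is suppressed without discarding any user outright.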
Problem

Research questions and friction points this paper is trying to address.

How to detect unknown fraudulent activities in recommender systems without rules tied to specific known attacks.
How to make sequential recommender systems robust against poisoning attacks of unknown type.
How to refine training so that injected fraudsters have limited impact on the learned patterns.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs detect unknown fraud in recommender systems
LoRec integrates LLM-enhanced calibration for robustness
User-wise reweighting reduces impact of poisoning attacks
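The detection idea behind LLM4Dec can be sketched as serializing a user's interaction history into text and asking a language model whether the profile looks organic. The prompt wording below is hypothetical, and `llm_score` is a heuristic stand-in for a real LLM call (it flags item-repetition patterns typical of push attacks); the paper's actual prompting and scoring are not reproduced here.

```python
# Hedged sketch of an LLM4Dec-style detection step. Both the prompt
# template and the scoring stub are illustrative assumptions.

def build_detection_prompt(item_titles):
    """Serialize an interaction sequence into a natural-language query."""
    history = "; ".join(item_titles)
    return (
        "A user interacted with the following items in order: "
        f"{history}. Based on general world knowledge, does this "
        "sequence look like an organic user or an injected fake profile? "
        "Answer with a suspiciousness score between 0 and 1."
    )

def llm_score(prompt):
    """Stand-in for a real LLM call; replace with an actual model query.

    Scores how repetitive the item list inside the prompt is, since
    injected push-attack profiles often spam a single target item.
    """
    body = prompt.split(": ", 1)[1]
    items = body.split(". Based", 1)[0].split("; ")
    repeats = len(items) - len(set(items))
    return min(1.0, repeats / max(len(items), 1) + 0.05)

organic = ["The Matrix", "Inception", "Blade Runner", "Arrival"]
fake = ["Target Item"] * 4  # push-attack profiles often repeat one item
print(llm_score(build_detection_prompt(organic)))  # low score
print(llm_score(build_detection_prompt(fake)))     # high score
```

In LoRec, scores of this kind are not used as a hard filter; they feed the calibrator's user-wise reweighting, which keeps the defense soft and generalizable to attack patterns the detector was never shown.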
Kaike Zhang
Institute of Computing Technology, Chinese Academy of Sciences
Trustworthy Graph Data Mining & Representation Learning; Robust Recommender System
Qi Cao
CAS Key Laboratory of AI Safety & Security, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Yunfan Wu
Institute of Computing Technology, Chinese Academy of Sciences
Recommender System; Collaborative Filtering; Adversarial Attack
Fei Sun
CAS Key Laboratory of AI Safety & Security, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Huawei Shen
CAS Key Laboratory of AI Safety & Security, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Xueqi Cheng
Ph.D. student, Florida State University
Data mining; LLM; GNN; Computational social science