Exploring Approaches for Detecting Memorization of Recommender System Data in Large Language Models

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the risk of training-data memorization and leakage in large language models (LLMs) applied to recommender systems, using datasets such as MovieLens-1M. Existing detection approaches rely on manual prompting and lack systematicity. To overcome this limitation, the study brings automated prompt engineering (APE) and unsupervised probing methods, specifically CCS and Cluster-Norm, into the evaluation of memorization of recommendation data. It formulates prompt discovery as a meta-learning process and systematically assesses the efficacy of jailbreak prompts, internal-activation analysis, and APE. Experiments show that APE is the strongest of the three approaches at extracting item-level information, though its success is moderate and it struggles with numerical interaction data; CCS reliably distinguishes real from fictional movie titles but fails on numerical user and rating data; and jailbreak prompts yield inconsistent results without improving retrieval. The findings indicate that automatically optimized prompts are the most promising strategy for extracting memorized data from LLMs.
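The meta-learning view of prompt discovery mentioned above can be sketched as a simple propose-score-keep loop. The sketch below is purely illustrative and is not the paper's implementation: the keyword-overlap objective and the word-level mutation step are stand-ins for querying an LLM and scoring how many memorized MovieLens-1M items its reply reveals.

```python
import random

# Toy stand-in for the extraction objective: a prompt scores higher the more
# "trigger" keywords it contains. In the real setting this would be an LLM
# call, scored by how many memorized items the response exposes.
# (The keyword set is an illustrative assumption.)
KEYWORDS = {"list", "movielens", "titles", "verbatim", "complete"}

def extraction_score(prompt: str) -> float:
    return len(set(prompt.lower().split()) & KEYWORDS) / len(KEYWORDS)

def mutate(prompt: str, vocab: list, rng: random.Random) -> str:
    # Propose a variant by appending or swapping one word: a crude analogue
    # of asking an LLM to rewrite the candidate instruction.
    words = prompt.split()
    if rng.random() < 0.5 or not words:
        words.append(rng.choice(vocab))
    else:
        words[rng.randrange(len(words))] = rng.choice(vocab)
    return " ".join(words)

def ape_search(seed: str, vocab: list, iters: int = 200):
    # Meta-learning loop: propose a candidate, score it, keep the best so far.
    rng = random.Random(0)
    best, best_score = seed, extraction_score(seed)
    for _ in range(iters):
        cand = mutate(best, vocab, rng)
        score = extraction_score(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

seed_prompt = "please share some data"
vocab = sorted(KEYWORDS) + ["please", "the", "some"]
best_prompt, best_score = ape_search(seed_prompt, vocab)
```

Because candidates are only accepted on improvement, the loop is a greedy hill climb; real APE systems typically sample batches of rewrites and score them against held-out evaluations rather than a fixed keyword set.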

📝 Abstract
Large Language Models (LLMs) are increasingly applied in recommendation scenarios due to their strong natural language understanding and generation capabilities. However, they are trained on vast corpora whose contents are not publicly disclosed, raising concerns about data leakage. Recent work has shown that the MovieLens-1M dataset is memorized by both the LLaMA and OpenAI model families, but the extraction of such memorized data has so far relied exclusively on manual prompt engineering. In this paper, we pose three main questions: Is it possible to enhance manual prompting? Can LLM memorization be detected through methods beyond manual prompting? And can the detection of data leakage be automated? To address these questions, we evaluate three approaches: (i) jailbreak prompt engineering; (ii) unsupervised latent knowledge discovery, probing internal activations via Contrast-Consistent Search (CCS) and Cluster-Norm; and (iii) Automatic Prompt Engineering (APE), which frames prompt discovery as a meta-learning process that iteratively refines candidate instructions. Experiments on MovieLens-1M using LLaMA models show that jailbreak prompting does not improve the retrieval of memorized items and remains inconsistent; CCS reliably distinguishes genuine from fabricated movie titles but fails on numerical user and rating data; and APE retrieves item-level information with moderate success yet struggles to recover numerical interactions. These findings suggest that automatically optimizing prompts is the most promising strategy for extracting memorized samples.
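As a rough illustration of the CCS probing idea from the abstract, the sketch below trains an unsupervised linear probe on synthetic activation pairs: a statement and its negation should receive probabilities summing to one (consistency), while avoiding the uninformative answer 0.5 (confidence). The synthetic data, dimensions, and training details are assumptions for illustration, not the paper's setup or real LLaMA activations.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 200

# Synthetic "hidden states": a statement and its negation share one latent
# truth direction with opposite signs, plus noise (toy stand-in for
# activations over genuine vs. fabricated movie titles).
truth_dir = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)          # 1 = statement is true
signs = 2 * labels - 1
h_pos = np.outer(signs, truth_dir) + 0.3 * rng.normal(size=(n, d))
h_neg = np.outer(-signs, truth_dir) + 0.3 * rng.normal(size=(n, d))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ccs_loss(p_pos, p_neg):
    # Consistency: p(x) should equal 1 - p(not x); confidence: p far from 0.5.
    return np.mean((p_pos - (1.0 - p_neg)) ** 2
                   + np.minimum(p_pos, p_neg) ** 2)

w = 0.01 * rng.normal(size=d)
b, lr = 0.0, 0.1
losses = []
for _ in range(500):
    p_pos = sigmoid(h_pos @ w + b)
    p_neg = sigmoid(h_neg @ w + b)
    losses.append(ccs_loss(p_pos, p_neg))
    # Hand-derived gradients of the CCS loss through the sigmoid probe.
    consist = p_pos + p_neg - 1.0
    conf_pos = (p_pos <= p_neg) * np.minimum(p_pos, p_neg)
    conf_neg = (p_neg < p_pos) * np.minimum(p_pos, p_neg)
    gp = (2 * consist + 2 * conf_pos) * p_pos * (1 - p_pos)
    gn = (2 * consist + 2 * conf_neg) * p_neg * (1 - p_neg)
    w -= lr * (gp @ h_pos + gn @ h_neg) / n
    b -= lr * (gp.sum() + gn.sum()) / n

# An unsupervised probe recovers truth only up to sign, so score both ways.
pred = (sigmoid(h_pos @ w + b) > 0.5).astype(int)
acc = max((pred == labels).mean(), (pred != labels).mean())
```

No truth labels are used during training, which is what makes this kind of probe attractive for memorization detection; the sign ambiguity of the recovered direction is a known property of CCS-style methods.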
Problem

Research questions and friction points this paper is trying to address.

memorization
data leakage
large language models
recommender systems
automatic detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatic Prompt Engineering
Memorization Detection
Contrast-Consistent Search
Latent Knowledge Discovery
Data Leakage
Antonio Colacicco
Politecnico di Bari, Bari, Italy
Vito Guida
Politecnico di Bari, Bari, Italy
Dario Di Palma
Ph.D. Student at Politecnico di Bari
Large Language Models · Recommender Systems · Interpretability · Multi-Objective Evaluation
F. Narducci
Politecnico di Bari, Bari, Italy
T. D. Noia
Politecnico di Bari, Bari, Italy