On the Effectiveness of Membership Inference in Targeted Data Extraction from Large Language Models

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) are prone to memorizing training data, posing dual privacy risks: training data extraction and membership inference attacks (MIAs). This paper is the first to systematically integrate multiple MIA paradigms—including confidence-, entropy-, gradient-, shadow-model-, and LLM-as-a-judge–based approaches—into an end-to-end targeted data extraction pipeline, augmented by prompt engineering and generative sampling strategies, to empirically evaluate their effectiveness in realistic extraction scenarios. Results reveal a significant performance degradation for most MIAs under extraction conditions; only the lightweight logit-difference method maintains high accuracy. The study uncovers a critical efficacy gap of MIAs in practical privacy threats, challenging the overestimation of their capabilities derived from standard benchmarks. It provides a more operationally grounded quantitative framework and methodology for assessing LLM privacy risks.

📝 Abstract
Large Language Models (LLMs) are prone to memorizing training data, which poses serious privacy risks. Two of the most prominent concerns are training data extraction and Membership Inference Attacks (MIAs). Prior research has shown that these threats are interconnected: adversaries can extract training data from an LLM by querying the model to generate a large volume of text and subsequently applying MIAs to verify whether a particular data point was included in the training set. In this study, we integrate multiple MIA techniques into the data extraction pipeline to systematically benchmark their effectiveness. We then compare their performance in this integrated setting against results from conventional MIA benchmarks, allowing us to evaluate their practical utility in real-world extraction scenarios.
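The verification step the abstract describes can be illustrated with the simplest of the MIA families the paper benchmarks: a confidence-based (loss-threshold) attack that flags a candidate string as a training member when the model assigns it unusually low average negative log-likelihood. The sketch below is a toy illustration, not the paper's implementation: the per-token probabilities are supplied directly, and the threshold value is an arbitrary assumption (in practice both would come from the target LLM and a calibration set).

```python
import math

def sequence_nll(token_probs):
    """Average negative log-likelihood over candidate tokens.

    token_probs: per-token probabilities the model assigns to a
    candidate sequence (supplied directly here; in a real attack
    they would be read off the LLM's output distribution).
    """
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def loss_threshold_mia(token_probs, threshold=1.0):
    """Confidence-based MIA: classify as a training member when the
    model is unusually confident, i.e. average NLL falls below an
    (assumed) calibrated threshold."""
    return sequence_nll(token_probs) < threshold

# A memorized sequence tends to receive high per-token probabilities...
member_like = [0.9, 0.8, 0.95, 0.85]
# ...while unseen text tends to receive much lower ones.
nonmember_like = [0.2, 0.1, 0.3, 0.15]

print(loss_threshold_mia(member_like))     # → True
print(loss_threshold_mia(nonmember_like))  # → False
```

In the paper's pipeline this scoring step would run over every generation sampled from the model, filtering the candidates down to those most likely to be verbatim training data.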
Problem

Research questions and friction points this paper is trying to address.

Evaluating MIA effectiveness in targeted data extraction from LLMs
Benchmarking MIA techniques in real-world data extraction scenarios
Assessing privacy risks from LLM memorization and data extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating multiple membership inference attack techniques
Systematically benchmarking effectiveness in data extraction pipeline
Comparing performance against conventional MIA benchmarks
Ali Al Sahili
American University of Beirut, Beirut, Lebanon
Ali Chehab
Professor & Chair of ECE Department, American University of Beirut
Cryptography, AI for Cybersecurity, AI for Medicine
Razane Tajeddine
American University of Beirut, Beirut, Lebanon