RAG Security and Privacy: Formalizing the Threat Model and Attack Surface

📅 2025-09-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
RAG systems enhance LLM factuality but introduce novel privacy and security risks—including document leakage, membership inference, and retrieval-augmented data poisoning—yet lack formal threat modeling. This paper presents the first structured threat model for RAG, rigorously defining its attack surface, security boundaries, and attacker capabilities under multi-level privilege assumptions. We propose the first fine-grained threat taxonomy, formally characterizing critical attack vectors such as document-level membership inference and retrieval-augmented data poisoning. By integrating adversarial analysis with quantitative information-leakage measurement, we achieve verifiable, end-to-end security risk modeling across the RAG pipeline. Our framework establishes a theoretical foundation for RAG security research, provides a unified evaluation standard, and informs principled defense design. (149 words)

Technology Category

Application Category

📝 Abstract
Retrieval-Augmented Generation (RAG) is an emerging approach in natural language processing that combines large language models (LLMs) with external document retrieval to produce more accurate and grounded responses. While RAG has shown strong potential in reducing hallucinations and improving factual consistency, it also introduces new privacy and security challenges that differ from those faced by traditional LLMs. Existing research has demonstrated that LLMs can leak sensitive information through training data memorization or adversarial prompts, and RAG systems inherit many of these vulnerabilities. At the same time, reliance of RAG on an external knowledge base opens new attack surfaces, including the potential for leaking information about the presence or content of retrieved documents, or for injecting malicious content to manipulate model behavior. Despite these risks, there is currently no formal framework that defines the threat landscape for RAG systems. In this paper, we address a critical gap in the literature by proposing, to the best of our knowledge, the first formal threat model for retrieval-RAG systems. We introduce a structured taxonomy of adversary types based on their access to model components and data, and we formally define key threat vectors such as document-level membership inference and data poisoning, which pose serious privacy and integrity risks in real-world deployments. By establishing formal definitions and attack models, our work lays the foundation for a more rigorous and principled understanding of privacy and security in RAG systems.
Problem

Research questions and friction points this paper is trying to address.

Formalizing the threat model for RAG systems' security and privacy risks
Addressing new attack surfaces introduced by external knowledge retrieval
Defining adversary types and threat vectors like data poisoning attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formal threat model for RAG security and privacy
Taxonomy of adversary types by access level
Defined threat vectors like membership inference
🔎 Similar Papers
No similar papers found.
A
Atousa Arzanipour
University of South Florida, Tampa, USA
R
R. Behnia
University of South Florida, Tampa, USA
Reza Ebrahimi
Reza Ebrahimi
University of South Florida
Secure and Trustworthy AIAI-enabled CybersecurityStatistical Machine Learning
K
Kaushik Dutta
University of South Florida, Tampa, USA