🤖 AI Summary
In privacy-sensitive applications, retrieval-augmented generation (RAG) risks exposing sensitive information due to its reliance on external knowledge sources; existing differential privacy (DP) approaches typically inject noise at the query stage, causing the privacy budget to deplete cumulatively with each query.
Method: We propose DP-SynRAG, a novel framework that extends the private prediction paradigm to synthetic text generation, constructing a reusable, differentially private knowledge base. It combines subsampled record simulation with large language model (LLM)-based text generation under strict privacy constraints, yielding a synthetic database that incurs a one-time privacy cost and can be reused indefinitely.
Contribution/Results: Experiments demonstrate that, under a fixed privacy budget, DP-SynRAG significantly outperforms state-of-the-art private RAG systems, achieving a superior trade-off between rigorous privacy guarantees and end-to-end retrieval-generation utility.
📄 Abstract
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by grounding them in external knowledge. However, its application in sensitive domains is limited by privacy risks. Existing private RAG methods typically rely on query-time differential privacy (DP), which requires repeated noise injection and leads to accumulated privacy loss. To address this issue, we propose DP-SynRAG, a framework that uses LLMs to generate differentially private synthetic RAG databases. Unlike prior methods, the synthetic text can be reused once created, thereby avoiding repeated noise injection and additional privacy costs. To preserve essential information for downstream RAG tasks, DP-SynRAG extends private prediction, which instructs LLMs to generate text that mimics subsampled database records in a DP manner. Experiments show that DP-SynRAG outperforms state-of-the-art private RAG systems while maintaining a fixed privacy budget, offering a scalable solution for privacy-preserving RAG.
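The abstract does not spell out the generation algorithm, but the core idea of private prediction applied to text can be illustrated with a toy sketch: subsample private records, let each propose a next token (a stand-in for an LLM conditioned on that record), and release only a noisy-majority token so that no single record determines the output. All names here (`propose_token`, `synthesize`, the Laplace report-noisy-max aggregation, the per-token budget) are illustrative assumptions, not the paper's actual mechanism or accounting.

```python
import random
from collections import Counter

def propose_token(record, prefix):
    # Stand-in for an LLM proposing the next token conditioned on one
    # private record; here we simply continue the record verbatim.
    words = record.split()
    return words[len(prefix)] if len(prefix) < len(words) else "<eos>"

def noisy_argmax(votes, epsilon, rng):
    # Report-noisy-max with Laplace(1/epsilon) noise added to each vote
    # count, so the released token depends on any one record only in a
    # differentially private way. Laplace(1/eps) = Exp(eps) - Exp(eps).
    counts = Counter(votes)
    lap = lambda: rng.expovariate(epsilon) - rng.expovariate(epsilon)
    return max(counts, key=lambda t: counts[t] + lap())

def synthesize(records, epsilon_per_token, max_len=8, sample_rate=0.5, seed=0):
    # One-time DP generation: subsample records, let each vote on the
    # next token, append the noisy winner. The finished synthetic text
    # can then be retrieved against repeatedly at no extra privacy cost.
    rng = random.Random(seed)
    prefix = []
    for _ in range(max_len):
        batch = [r for r in records if rng.random() < sample_rate] or records
        votes = [propose_token(r, prefix) for r in batch]
        token = noisy_argmax(votes, epsilon_per_token, rng)
        if token == "<eos>":
            break
        prefix.append(token)
    return " ".join(prefix)
```

When many records agree on the continuation, the noise rarely changes the winner and the synthetic text stays useful; a rare or unique record's influence is drowned out, which is exactly the privacy/utility trade-off the paper's experiments measure end to end.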