Privacy-Preserving Reasoning with Knowledge-Distilled Parametric Retrieval Augmented Generation

📅 2025-08-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing parametric RAG (PRAG) methods synthesize question-answer pairs per document and fine-tune the LLM to produce a LoRA adapter for each one, resulting in high inference latency, poor generalization, and misalignment with standard RAG in hidden-state representations and document structure. To address these issues, we propose DistilledPRAG: a method that trains a parameter generator to encode document knowledge directly into LoRA parameters via hidden-state distillation, eliminating per-document QA synthesis and fine-tuning at inference time. We further introduce multi-document masked encoding to enable cross-document reasoning and out-of-distribution (OOD) generalization. Crucially, DistilledPRAG preserves alignment with standard RAG while achieving efficient parameterization. Experiments on four QA benchmarks demonstrate that our approach surpasses baseline methods in accuracy and significantly improves OOD generalization.

📝 Abstract
Current RAG systems require uploading plaintext documents to the cloud, risking private data leakage. Parametric RAG (PRAG) addresses this by encoding documents as LoRA adapters within the LLM, enabling reasoning without exposing raw content. However, it still faces two issues: (1) PRAG demands synthesizing QA pairs and fine-tuning the LLM for each individual document to create its corresponding LoRA, leading to unacceptable inference latency. (2) The performance of PRAG relies solely on synthetic QA data and lacks internal alignment with standard RAG, resulting in poor generalization on out-of-distribution (OOD) inputs. Therefore, achieving high-efficiency parameterization while maintaining RAG-level performance remains a critical challenge for privacy-preserving reasoning. In this paper, we propose DistilledPRAG, a generalizable knowledge-distilled parametric RAG model aligned with standard RAG in document structure and parameter activation. We first synthesize QA pairs from single and multiple documents to enhance cross-document reasoning. Then, we mask the plaintext documents with a special token and translate them to LoRA via a parameter generator, maintaining the standard RAG document structure. Finally, guided by synthetic QA data, we train the parameter generator to match standard RAG's hidden states and output logits, enabling RAG-style reasoning without the original documents. Experiments on four QA datasets show that DistilledPRAG outperforms baselines in accuracy and generalizes well on OOD data.
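The pipeline described in the abstract — masking the plaintext document with a special token, translating it to LoRA factors via a parameter generator, and applying the update to frozen base weights — can be sketched as follows. This is a minimal illustration: the generator here is a single hypothetical linear map and the sizes are made up, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, enc_dim = 16, 4, 32   # hidden size, LoRA rank, document-encoding size (illustrative)

# Hypothetical parameter generator: a linear map from a document encoding
# to flattened LoRA factors A (r x d) and B (d x r).
G_a = rng.normal(scale=0.02, size=(enc_dim, r * d))
G_b = rng.normal(scale=0.02, size=(enc_dim, d * r))

def mask_document(tokens, mask_token="<DOC>"):
    # Replace every plaintext token with a special token, so the standard RAG
    # document structure (length and position) is preserved without the content.
    return [mask_token] * len(tokens)

def generate_lora(doc_encoding):
    A = (doc_encoding @ G_a).reshape(r, d)   # down-projection factor
    B = (doc_encoding @ G_b).reshape(d, r)   # up-projection factor
    return A, B

W = rng.normal(size=(d, d))                  # one frozen base weight matrix
doc_encoding = rng.normal(size=(enc_dim,))   # stand-in encoding of a private document
A, B = generate_lora(doc_encoding)
W_adapted = W + B @ A                        # LoRA update: W' = W + BA

print(mask_document(["private", "medical", "record"]))
print(W_adapted.shape)
```

Because only the low-rank factors depend on the document, the raw text never needs to leave the client once the encoding and generator run locally.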
Problem

Research questions and friction points this paper is trying to address.

Addresses private data leakage in cloud-based RAG systems
Reduces high inference latency in parametric RAG approaches
Improves generalization on out-of-distribution inputs for privacy-preserving reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge distillation aligns PRAG with standard RAG performance
Parameter generator converts masked documents to LoRA adapters
Synthetic multi-document QA enhances cross-document reasoning capability
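The distillation objective behind these contributions — training the parameter generator so the adapted model matches a standard RAG teacher's hidden states and output logits — can be written as a weighted sum of an MSE term and a KL term. A minimal NumPy sketch, where the weighting `alpha` is an illustrative choice rather than the paper's:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_hidden, teacher_hidden,
                      student_logits, teacher_logits, alpha=0.5):
    """Combined objective: MSE over hidden states + KL over output distributions.

    The teacher is standard RAG (sees plaintext documents); the student is the
    LoRA-adapted model produced by the parameter generator.
    """
    mse = np.mean((student_hidden - teacher_hidden) ** 2)
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    # KL(teacher || student), averaged over the batch dimension.
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))) / teacher_logits.shape[0]
    return alpha * mse + (1 - alpha) * kl

rng = np.random.default_rng(0)
h_t = rng.normal(size=(2, 8))        # teacher hidden states (toy shapes)
h_s = h_t + 0.1 * rng.normal(size=(2, 8))
z_t = rng.normal(size=(2, 5))        # teacher logits over a toy vocabulary
z_s = z_t + 0.1 * rng.normal(size=(2, 5))
loss = distillation_loss(h_s, h_t, z_s, z_t)
```

A perfectly aligned student drives both terms to zero, which is the sense in which the generator is "aligned with standard RAG in parameter activation."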
Jinwen Chen
University of Electronic Science and Technology of China
Spatial Crowdsourcing
Hainan Zhang
Beihang University
Dialogue Generation · Text Generation · Federated Learning · Natural Language Processing
Liang Pang
Associate Professor, Institute of Computing Technology, Chinese Academy of Sciences
Large Language Model · Semantic Matching · Question Answering · Text Matching · Text Generation
Yongxin Tong
Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, School of Artificial Intelligence, Beihang University
Haibo Zhou
Meituan
Yuan Zhan
Meituan
Wei Lin
Meituan
Zhiming Zheng
Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, School of Artificial Intelligence, Beihang University