🤖 AI Summary
Existing parametric RAG (PRAG) methods synthesize question–answer pairs for each document and fine-tune the LLM to produce a per-document LoRA adapter, resulting in high inference latency, poor generalization, and misalignment with standard RAG in both hidden-state representations and document structure. To address these issues, we propose DistilledPRAG, a method that encodes document knowledge directly into LoRA parameters via hidden-state distillation, eliminating both plaintext document uploads and per-document QA-based fine-tuning at inference time. We further introduce multi-document masked encoding and a parameter generator to enable cross-document reasoning and out-of-distribution (OOD) generalization, while the distillation objective keeps DistilledPRAG aligned with standard RAG. Experiments on four QA benchmarks show that our approach surpasses baselines in accuracy and generalizes substantially better on OOD data.
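The masked encoding described above replaces plaintext document content with a special token while keeping the standard RAG prompt layout. The toy sketch below illustrates that idea only; the mask token name `<DOC>` and the whitespace tokenization are assumptions for illustration, not the paper's actual tokenizer or vocabulary.

```python
def mask_documents(documents, mask_token="<DOC>"):
    """Replace each plaintext document with a same-length run of mask tokens,
    hiding content while preserving the document slots of a standard RAG
    prompt.  Whitespace splitting stands in for real tokenization."""
    masked = []
    for doc in documents:
        n_tokens = len(doc.split())
        masked.append(" ".join([mask_token] * n_tokens))
    return masked
```

In the actual method, these masked sequences are what the parameter generator consumes, so the generated LoRA must supply the knowledge that the masked-out text no longer carries.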
📝 Abstract
Standard RAG systems require uploading plaintext documents to the cloud, risking leakage of private data. Parametric RAG (PRAG) addresses this by encoding documents as LoRA modules within LLMs, enabling reasoning without exposing raw content. However, PRAG still faces two issues: (1) it must synthesize QA pairs and fine-tune the LLM for each individual document to create its corresponding LoRA module, leading to unacceptable inference latency; and (2) its performance relies solely on synthetic QA data and lacks internal alignment with standard RAG, resulting in poor generalization on out-of-distribution (OOD) inputs. Achieving high-efficiency parameterization while maintaining RAG-level performance therefore remains a critical challenge for privacy-preserving reasoning. In this paper, we propose DistilledPRAG, a generalizable knowledge-distilled parametric RAG model aligned with standard RAG in document structure and parameter activation. We first synthesize QA pairs from single and multiple documents to strengthen cross-document reasoning. We then mask the plaintext documents with a special token and translate them into LoRA parameters via a parameter generator, preserving the standard RAG document structure. Finally, guided by the synthetic QA data, we train the parameter generator to match standard RAG's hidden states and output logits, enabling RAG-style reasoning without the original documents. Experiments on four QA datasets show that DistilledPRAG outperforms baselines in accuracy and generalizes well on OOD data.
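The training signal described in the abstract matches the parametric student against a standard RAG teacher on both hidden states and output logits. A minimal sketch of such an objective is below, assuming an MSE term on hidden states and a KL term on next-token distributions; the function names, the weights `alpha` and `beta`, and the exact loss combination are illustrative assumptions, not the paper's reported formulation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_hidden, teacher_hidden,
                 student_logits, teacher_logits,
                 alpha=1.0, beta=1.0):
    """Illustrative distillation objective (assumed form): MSE between the
    student's and teacher's hidden states plus KL(teacher || student)
    between their next-token distributions."""
    mse = sum((s - t) ** 2
              for s, t in zip(student_hidden, teacher_hidden)) / len(student_hidden)
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    kl = sum(pt * (math.log(pt + 1e-12) - math.log(ps + 1e-12))
             for pt, ps in zip(p_t, p_s))
    return alpha * mse + beta * kl
```

When the student (LLM with generated LoRA, fed masked documents) exactly matches the teacher (standard RAG over plaintext documents), the loss is zero; gradients from this objective would flow into the parameter generator rather than the base LLM.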