Spectrum Projection Score: Aligning Retrieved Summaries with Reader Models in Retrieval-Augmented Generation

📅 2025-08-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In retrieval-augmented generation (RAG), existing evaluation methods struggle to disentangle retrieval quality from LLM prompt sensitivity, leading to confounded performance assessment. To address this, we propose the Spectrum Projection Score (SPS)—a supervision-free, lightweight semantic alignment metric that quantifies a retrieved summary’s genuine contribution to generation by projecting it onto the principal subspace of the LLM’s hidden-layer representations. Leveraging SPS, we design xCompress, a runtime framework that dynamically filters, ranks, and compresses candidate summaries. This is the first approach to interpret retrieval-generation interaction through subspace geometry, enabling interpretable, decoupled evaluation of retrieval efficacy and real-time optimization. Evaluated across five QA benchmarks and four open-source LLMs, xCompress significantly improves generation quality while enhancing RAG transparency and controllability.

📝 Abstract
Large Language Models (LLMs) have shown improved generation performance through retrieval-augmented generation (RAG) following the retriever-reader paradigm, which supplements model inputs with externally retrieved knowledge. However, prior work often evaluates RAG holistically, assessing the retriever and reader jointly, which makes it difficult to isolate the true contribution of retrieval, particularly given the prompt sensitivity of LLMs used as readers. We introduce the Spectrum Projection Score (SPS), a lightweight, supervision-free metric that lets the reader gauge the semantic alignment of a retrieved summary with its own hidden representation: it measures relevance by comparing the area spanned by the tokens generated from the summary against the principal directions of the reader's representation subspace. Building on SPS, we present xCompress, an inference-time controller framework that dynamically samples, ranks, and compresses retrieval summary candidates. Extensive experiments on five QA benchmarks with four open-source LLMs show that SPS not only enhances performance across a range of tasks but also provides a principled perspective on the interaction between retrieval and generation.
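To make the projection idea concrete, here is a minimal sketch of an SPS-style alignment score: extract the top-k principal directions of the reader's hidden states via SVD, project the summary's token representations onto that subspace, and report the fraction of representation energy the subspace captures. The function name, the energy-ratio formula, and the choice of k are illustrative assumptions; the paper's exact area-based SPS computation may differ.

```python
import numpy as np

def spectrum_projection_score(reader_hidden, summary_hidden, k=8):
    """Illustrative SPS-style score (not the paper's exact formula).

    reader_hidden:  (n_tokens, d) hidden states defining the reader subspace.
    summary_hidden: (m_tokens, d) representations of the retrieved summary.
    Returns the fraction of the summary's representation energy that lies
    in the reader's top-k principal subspace (a value in [0, 1]).
    """
    # Top-k principal directions of the reader's centered hidden states
    centered = reader_hidden - reader_hidden.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                      # (k, d), rows are orthonormal

    # Project summary token representations onto the principal subspace
    proj = summary_hidden @ basis.T     # (m_tokens, k)

    # Energy ratio: how much of the summary the subspace explains
    return float(np.sum(proj ** 2) / (np.sum(summary_hidden ** 2) + 1e-12))
```

A summary whose token representations lie entirely inside the reader's principal subspace scores close to 1, while an orthogonal one scores near 0, which is the decoupled, supervision-free signal the metric is after.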
Problem

Research questions and friction points this paper is trying to address.

Evaluating semantic alignment of retrieved summaries with reader models
Measuring relevance of retrieved summaries without supervision
Improving retrieval-augmented generation performance dynamically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight metric SPS for semantic alignment
Dynamic framework xCompress for summary handling
Enhances performance across multiple QA benchmarks
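The bullets above can be sketched as a sample-then-rank loop: score each candidate summary against the reader and keep the best-aligned one. This is an assumed reading of xCompress's selection step; the `rank_summaries` and `mean_cosine` names are hypothetical, the scorer is a toy cosine stand-in for SPS, and the compression stage is omitted.

```python
import numpy as np

def rank_summaries(candidates, reader_hidden, score_fn):
    """Hypothetical sketch of xCompress's sample-and-rank step.

    candidates:    list of (summary_text, summary_hidden) pairs.
    reader_hidden: (n_tokens, d) reader hidden states.
    score_fn:      alignment scorer (stand-in for SPS).
    Returns the text of the best-aligned candidate summary.
    """
    scored = [(score_fn(reader_hidden, h), text) for text, h in candidates]
    scored.sort(reverse=True)           # highest alignment first
    return scored[0][1]

def mean_cosine(reader_hidden, summary_hidden):
    """Toy stand-in scorer: cosine similarity of mean representations."""
    a = reader_hidden.mean(axis=0)
    b = summary_hidden.mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Because the scorer is supervision-free, the same loop can run at inference time for any reader whose hidden states are accessible.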