Decoding Dense Embeddings: Sparse Autoencoders for Interpreting and Discretizing Dense Retrieval

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dense Passage Retrieval (DPR) achieves strong performance but suffers from poor interpretability and lacks retrieval attribution. To address this, we propose Concept-Level Sparse Retrieval (CL-SR), the first framework to apply sparse autoencoders (SAEs) to DPR embeddings for semantic disentanglement and discretization—mapping dense vectors to human-interpretable latent concepts and generating natural-language concept explanations. CL-SR synergistically integrates dense semantic modeling with sparse indexing, forming a dense-sparse hybrid architecture that enables concept-level alignment and traceable matching. Experiments demonstrate that CL-SR maintains robustness under lexical and semantic query-document mismatches, significantly reduces index size and computational overhead, and supports human-understandable, post-hoc query-document matching attribution. By bridging dense representation learning with interpretable sparse structures, CL-SR establishes a novel paradigm for explainable dense retrieval.

📝 Abstract
Despite their strong performance, Dense Passage Retrieval (DPR) models suffer from a lack of interpretability. In this work, we propose a novel interpretability framework that leverages Sparse Autoencoders (SAEs) to decompose previously uninterpretable dense embeddings from DPR models into distinct, interpretable latent concepts. We generate natural language descriptions for each latent concept, enabling human interpretations of both the dense embeddings and the query-document similarity scores of DPR models. We further introduce Concept-Level Sparse Retrieval (CL-SR), a retrieval framework that directly utilizes the extracted latent concepts as indexing units. CL-SR effectively combines the semantic expressiveness of dense embeddings with the transparency and efficiency of sparse representations. We show that CL-SR achieves high index-space and computational efficiency while maintaining robust performance across vocabulary and semantic mismatches.
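The core idea above is that a sparse autoencoder maps a dense embedding onto a much wider, mostly-zero activation vector whose few active dimensions can be read as "concepts". The paper does not specify its architecture here, so the following is a minimal illustrative sketch with randomly initialized weights (standing in for trained ones) and a top-k sparsity rule, one common SAE scheme; all dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

D, C = 64, 512  # dense embedding dim, number of latent concepts (hypothetical)

# Random weights stand in for a trained encoder/decoder.
W_enc = rng.normal(scale=0.1, size=(D, C))
b_enc = np.zeros(C)
W_dec = rng.normal(scale=0.1, size=(C, D))

def sae_encode(x, k=8):
    """Map a dense embedding to a k-sparse concept-activation vector."""
    a = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU pre-activations
    idx = np.argsort(a)[-k:]                # keep only the top-k concepts
    z = np.zeros_like(a)
    z[idx] = a[idx]
    return z

def sae_decode(z):
    """Reconstruct the dense embedding from the sparse concept vector."""
    return z @ W_dec

x = rng.normal(size=D)   # stand-in for a DPR query or passage embedding
z = sae_encode(x)
x_hat = sae_decode(z)    # reconstruction; training would minimize ||x - x_hat||
```

In this framing, each nonzero index of `z` names a latent concept, which is what makes it possible to attach a natural-language description to each one.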
Problem

Research questions and friction points this paper is trying to address.

Lack of interpretability in Dense Passage Retrieval models
Decomposing dense embeddings into interpretable latent concepts
Combining dense embeddings' expressiveness with sparse representations' efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Autoencoders decompose dense embeddings
Natural language describes latent concepts
Concept-Level Sparse Retrieval enhances efficiency
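The last point, using extracted latent concepts as indexing units, can be pictured as an ordinary inverted index keyed by concept IDs instead of vocabulary terms. The sketch below is an assumption-laden toy, not the paper's implementation: concept vectors, activation weights, and the dot-product scoring rule are all illustrative.

```python
from collections import defaultdict

# Hypothetical sparse concept vectors: {concept_id: activation weight}.
docs = {
    "d1": {3: 0.9, 17: 0.4},
    "d2": {17: 0.7, 42: 0.5},
    "d3": {8: 1.0},
}

# Inverted index: concept id -> postings of (doc_id, weight).
index = defaultdict(list)
for doc_id, concepts in docs.items():
    for cid, w in concepts.items():
        index[cid].append((doc_id, w))

def retrieve(query_concepts):
    """Score documents by a dot product over shared concepts.

    The shared concept ids double as a human-readable attribution
    of why each document matched the query.
    """
    scores = defaultdict(float)
    matched = defaultdict(list)
    for cid, qw in query_concepts.items():
        for doc_id, dw in index.get(cid, []):
            scores[doc_id] += qw * dw
            matched[doc_id].append(cid)
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return ranked, dict(matched)

ranked, why = retrieve({17: 1.0, 42: 0.2})
# d2 scores 0.7*1.0 + 0.5*0.2 = 0.8 and outranks d1 (0.4)
```

Because only a handful of concepts are active per query, scoring touches a few short posting lists rather than every dense vector, which is where the claimed index-space and computational savings come from.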