Adaptive Semantic Prompt Caching with VectorQ

📅 2025-02-06
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing semantic prompt caching systems rely on a static similarity threshold, which cannot accommodate the varying complexity and uncertainty across embeddings, leading to low hit rates and high error rates. To address this, the paper proposes VectorQ, a framework that learns embedding-specific threshold regions instead of assuming a single global threshold, adapting each region to the complexity and uncertainty of its embedding. Evaluated on four diverse datasets, VectorQ consistently outperforms state-of-the-art static-threshold systems, achieving up to a 12x increase in cache hit rate and error-rate reductions of up to 92%. Overall, VectorQ provides a more robust and generalizable decision rule for LLM inference caching.

๐Ÿ“ Abstract
Semantic prompt caches reduce the latency and cost of large language model (LLM) inference by reusing cached LLM-generated responses for semantically similar prompts. Vector similarity metrics assign a numerical score to quantify the similarity between an embedded prompt and its nearest neighbor in the cache. Existing systems rely on a static threshold to classify whether the similarity score is sufficiently high to result in a cache hit. We show that this one-size-fits-all threshold is insufficient across different prompts. We propose VectorQ, a framework to learn embedding-specific threshold regions that adapt to the complexity and uncertainty of an embedding. Through evaluations on a combination of four diverse datasets, we show that VectorQ consistently outperforms state-of-the-art systems across all static thresholds, achieving up to 12x increases in cache hit rate and error rate reductions up to 92%.
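The static-threshold lookup that the abstract critiques can be sketched in a few lines. The toy embedding function and the threshold value below are illustrative assumptions, not the paper's implementation; VectorQ would replace the single global `threshold` with learned, embedding-specific threshold regions.

```python
import math

def embed(text):
    # Toy bag-of-letters embedding, normalized to unit length.
    # A real system would use a sentence-embedding model (assumption).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class SemanticCache:
    """Static-threshold semantic prompt cache: the baseline VectorQ improves on."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold  # one global threshold for all prompts
        self.entries = []           # list of (embedding, cached_response)

    def lookup(self, prompt):
        emb = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(emb, e[0]), default=None)
        if best is not None and cosine(emb, best[0]) >= self.threshold:
            return best[1]  # cache hit: reuse the stored LLM response
        return None         # cache miss: caller should query the LLM

    def insert(self, prompt, response):
        self.entries.append((embed(prompt), response))
```

A one-size-fits-all `threshold` is exactly the failure mode the paper shows: some embeddings need a tighter bound to avoid returning a wrong cached response, while others could hit safely at a lower score.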
Problem

Research questions and friction points this paper is trying to address.

Optimize semantic prompt caching efficiency
Adapt thresholds for embedding similarity
Enhance LLM inference performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive threshold learning
Vector similarity metrics
Semantic prompt caching