CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion

📅 2024-05-26
🏛️ Proceedings of the Twentieth European Conference on Computer Systems
📈 Citations: 7
✨ Influential: 0
🤖 AI Summary
To address high prefill latency in RAG caused by multi-chunk context, this work proposes a cross-prefix KV cache fusion mechanism, the first to enable efficient KV cache reuse for arbitrary-position text chunks. Methodologically, it introduces a selective token recomputation strategy coordinated with the retrieval pipeline, enabling low-overhead deployment of caches on slow, high-capacity storage (e.g., NVMe SSDs); it further incorporates a RAG-context-aware cache management policy. Experiments across three open-source LLMs and four RAG benchmarks demonstrate a 2.2–3.3× reduction in time-to-first-token and a 2.8–5.0× throughput improvement, while preserving generation quality identical to full prefill. This is the first system-level optimization that supports non-prefix chunk KV reuse, simultaneously ensuring deployment flexibility and generation fidelity.
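The selective token recomputation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the L2-deviation metric, and the 15% default ratio are assumptions made here for clarity (the core idea is to recompute only the tokens whose reused KV values deviate most from a fresh computation).

```python
import numpy as np

def select_hkvd_tokens(kv_reused, kv_fresh, ratio=0.15):
    """Pick the token indices whose reused KV values deviate most from
    freshly computed ones (hypothetical helper; ratio is illustrative).

    kv_reused, kv_fresh: (num_tokens, hidden) KV values for one layer.
    Returns indices of the top-k highest-deviation tokens, largest first.
    """
    # Per-token deviation: L2 distance between reused and fresh KV rows.
    deviation = np.linalg.norm(kv_fresh - kv_reused, axis=1)
    k = max(1, int(ratio * len(deviation)))
    # argsort is ascending; take the last k indices and reverse them.
    return np.argsort(deviation)[-k:][::-1]
```

Only these selected tokens are re-run through the attention layers; the rest keep their precomputed KV entries, which is where the prefill savings come from.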

πŸ“ Abstract
Large language models (LLMs) often incorporate multiple text chunks in their inputs to provide the necessary contexts. To speed up the prefill of long LLM inputs, one can pre-compute the KV cache of a text and re-use the KV cache when the context is reused as the prefix of another LLM input. However, the reused text chunks are not always the input prefix, which makes precomputed KV caches not directly usable, since they ignore the text's cross-attention with the preceding texts. Thus, the benefits of reusing KV caches remain largely unrealized. This paper tackles one key challenge: when an LLM input contains multiple text chunks, how to quickly combine their precomputed KV caches in order to achieve the same generation quality as the expensive full prefill (i.e., without reusing KV cache)? This challenge naturally arises in retrieval-augmented generation (RAG), where the input is supplemented with multiple retrieved texts as the context. We present CacheBlend, a scheme that reuses the precomputed KV caches, regardless of whether they are the prefix or not, and selectively recomputes the KV values of a small subset of tokens to partially update each reused KV cache. Moreover, the small extra delay for recomputing some tokens can be pipelined with the retrieval of KV caches within the same job, allowing CacheBlend to store KV caches on slower devices with more storage capacity while retrieving them without increasing the inference delay. By comparing CacheBlend with state-of-the-art KV cache reusing schemes on three open-source LLMs of various sizes and four popular benchmark datasets of different tasks, we show that CacheBlend reduces time-to-first-token (TTFT) by 2.2–3.3× and increases inference throughput by 2.8–5× over full KV recompute, without compromising generation quality. The code is available at https://github.com/LMCache/LMCache.
Problem

Research questions and friction points this paper is trying to address.

Efficiently combine precomputed KV caches for multi-chunk LLM inputs
Maintain generation quality while reusing non-prefix KV caches in RAG
Minimize recomputation overhead by selectively updating reused KV caches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reuses precomputed KV caches even for non-prefix chunks
Selectively recomputes small token subsets
Pipelines recomputation with KV cache retrieval
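The last point, overlapping KV cache retrieval with recomputation, can be sketched with a single prefetch thread. All names here (`load_kv`, `recompute_partial`, `blend_chunks`) are hypothetical stand-ins for the paper's pipeline, shown only to illustrate how fetching the next chunk's cache from slow storage can hide behind the current chunk's partial recompute:

```python
from concurrent.futures import ThreadPoolExecutor

def blend_chunks(chunk_ids, load_kv, recompute_partial):
    """Overlap KV cache loading with selective recomputation.

    load_kv(cid): fetches a chunk's KV cache from slow storage (assumed).
    recompute_partial(kv): updates the high-deviation tokens (assumed).
    """
    if not chunk_ids:
        return []
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Kick off the first fetch before entering the loop.
        future = pool.submit(load_kv, chunk_ids[0])
        for i in range(len(chunk_ids)):
            kv = future.result()  # wait for the current chunk's KV cache
            if i + 1 < len(chunk_ids):
                # Start fetching the next chunk while we recompute this one.
                future = pool.submit(load_kv, chunk_ids[i + 1])
            results.append(recompute_partial(kv))
    return results
```

Because the per-chunk recompute touches only a small token subset, its delay can roughly match the fetch time of the next chunk, which is what lets the caches live on slower, larger devices without inflating TTFT.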