Strata: Hierarchical Context Caching for Long Context Language Model Serving

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address GPU memory overflow and I/O bottlenecks caused by KV caching in long-context LLM serving, this paper proposes Strata, a hierarchical caching framework. Methodologically, Strata introduces (1) GPU-accelerated I/O via CUDA kernels that aggregate non-contiguous page reads to mitigate cache fragmentation; (2) decoupled CPU–GPU memory layouts to enable efficient coordination across heterogeneous storage tiers; and (3) a cache-aware dynamic request scheduler that explicitly models load latency and overlaps I/O with computation. Implemented atop SGLang, Strata achieves up to 5× lower first-token latency compared to vLLM+LMCache and 3.75× higher throughput for long-context workloads versus TensorRT-LLM—while preserving short-context performance.
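The cache-aware scheduler described above can be illustrated with a minimal sketch. This is not Strata's actual implementation: the `Request` fields, bandwidth constant, and prefill-throughput constant are all hypothetical stand-ins for whatever cost model the real scheduler uses. The idea shown is only the core one from the summary: estimate each request's cache-load time versus its prefill compute time, then order the batch so that load-heavy transfers start while compute-heavy requests keep the GPU busy, overlapping I/O with computation.

```python
from dataclasses import dataclass

@dataclass
class Request:
    cached_bytes: int      # KV-cache bytes to load from the CPU/SSD tier
    uncached_tokens: int   # tokens that still need prefill compute

# Illustrative constants (assumed, not from the paper)
CPU_GPU_BW = 20e9           # host-to-device bandwidth, bytes/s
PREFILL_TOK_PER_S = 50_000  # prefill throughput, tokens/s

def load_time(r: Request) -> float:
    """Estimated time to load this request's cached KV state onto the GPU."""
    return r.cached_bytes / CPU_GPU_BW

def compute_time(r: Request) -> float:
    """Estimated prefill compute time for the uncached portion."""
    return r.uncached_tokens / PREFILL_TOK_PER_S

def schedule(batch: list[Request]) -> list[Request]:
    """Order requests so compute-heavy work runs first, giving
    load-heavy requests' transfers time to overlap with compute
    instead of stalling the GPU."""
    return sorted(batch, key=lambda r: compute_time(r) - load_time(r),
                  reverse=True)
```

A compute-dominated request (many uncached tokens, small cache) is scheduled ahead of a load-dominated one (large cache, few uncached tokens), so the latter's transfer proceeds in the shadow of the former's prefill.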

📝 Abstract
Large Language Models (LLMs) with expanding context windows face significant performance hurdles. While caching key-value (KV) states is critical for avoiding redundant computation, the storage footprint of long-context caches quickly exceeds GPU memory capacity, forcing production systems to adopt hierarchical caching across memory hierarchies. However, transferring large cached contexts back to the GPU introduces severe performance bottlenecks: fragmented I/O from paged layouts prevents full bandwidth utilization, and existing schedulers fail to account for cache-loading delays, leaving systems loading-bound rather than compute-bound. We present Strata, a hierarchical context caching framework designed for efficient long-context LLM serving. Strata introduces GPU-assisted I/O to combat KV cache fragmentation, decouples GPU and CPU memory layouts, and employs cache-aware request scheduling to balance compute with I/O latency, overlapping unavoidable stalls with complementary tasks. Built on SGLang and deployed in production, Strata achieves up to 5x lower Time-To-First-Token (TTFT) compared to vLLM + LMCache and a 3.75x speedup over NVIDIA TensorRT-LLM on long-context benchmarks, without degrading short-context performance.
Problem

Research questions and friction points this paper is trying to address.

Reducing GPU memory footprint for long-context KV caching
Overcoming fragmented I/O bottlenecks in cache transfers
Mitigating cache-loading delays through optimized scheduling
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU-assisted I/O to mitigate KV cache fragmentation
Cache-aware request scheduling that balances compute with I/O latency
Decoupled GPU and CPU memory layouts for coordination across storage tiers
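The first innovation, aggregating non-contiguous page reads, can be sketched in miniature. This is a pure-Python stand-in for intuition only: the real system performs the gather with CUDA kernels on device memory, and `PAGE_SIZE`, `gather_pages`, and the byte-pool representation here are all illustrative assumptions. The point it demonstrates is coalescing scattered KV pages into one contiguous buffer, so a single large transfer can use full bandwidth instead of many small fragmented copies.

```python
PAGE_SIZE = 4  # bytes per KV page (toy value; real pages are far larger)

def gather_pages(pool: bytes, page_ids: list[int]) -> bytes:
    """Coalesce scattered pages from a paged KV pool into one
    contiguous staging buffer, turning many small non-contiguous
    reads into a single large transfer."""
    out = bytearray()
    for pid in page_ids:
        start = pid * PAGE_SIZE
        out += pool[start:start + PAGE_SIZE]  # copy one page
    return bytes(out)

# Example: a 4-page pool; a request's cache occupies pages 3 and 0.
pool = bytes(range(16))
staged = gather_pages(pool, [3, 0])  # one contiguous 8-byte buffer
```

Because the GPU and CPU layouts are decoupled, the staging buffer need not mirror the paged layout of the source tier, which is what makes this aggregation step possible.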