🤖 AI Summary
This paper addresses the "delayed hit" scenario in cache systems, where a cache miss incurs significant retrieval latency while waiting on backend storage; this setting poses fundamental challenges for theoretical analysis and optimization.
Method: We introduce a two-parameter model characterizing the system: the delay $Z$ (the ratio of retrieval latency to inter-request time) and the cache size $k$, and derive the first tight competitive-ratio bounds for this setting.
Contribution/Results: We prove that marking-based caching algorithms—including LRU—achieve an $O(Zk)$ competitive ratio, establishing the first rigorous theoretical guarantee for delayed-hit caching. This result fills a longstanding gap in the theoretical understanding of latency-affected caching and provides the first provably grounded performance benchmark for cache-policy design in high-latency environments such as CDNs and edge computing.
📝 Abstract
In the classical caching problem, when a requested page is not present in the cache (i.e., a "miss"), it is assumed to travel from the backing store into the cache "before" the next request arrives. However, in many real-life applications, such as content delivery networks, this assumption is unrealistic. The "delayed-hits" model for caching, introduced by Atre, Sherry, Wang, and Berger, accounts for the latency between a missed cache request and the corresponding arrival from the backing store. This theoretical model has two parameters: the "delay" $Z$, representing the ratio between the retrieval delay and the inter-request delay in an application, and the "cache size" $k$, as in classical caching. Classical caching corresponds to $Z=1$, whereas larger values of $Z$ model applications where retrieving missed requests is expensive. Despite the practical relevance of the delayed-hits model, its theoretical underpinnings are still poorly understood. We present the first tight theoretical guarantee for optimizing delayed-hits caching: The "Least Recently Used" algorithm, a natural, deterministic, online algorithm widely used in practice, is $O(Zk)$-competitive, meaning it incurs at most $O(Zk)$ times more latency than the (offline) optimal schedule. Our result extends to any so-called "marking" algorithm.
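To make the cost model concrete, here is a minimal, hypothetical Python sketch of the delayed-hits accounting described above: requests arrive one per time step, a miss starts a fetch that completes $Z$ steps later, and a request for a page whose fetch is still in flight (a "delayed hit") pays only the remaining fetch time. The LRU policy and the exact eviction timing here are simplifying assumptions for illustration, not the paper's formal model.

```python
from collections import OrderedDict

def lru_delayed_hits_latency(requests, k, Z):
    """Total latency of LRU under a simple delayed-hits model (illustrative).

    One request per time step; a miss at time t finishes fetching at t + Z.
    Requests for an in-flight page wait only the remaining fetch time.
    The cache holds at most k pages, evicting the least recently used.
    """
    cache = OrderedDict()   # pages in LRU order (oldest first)
    in_flight = {}          # page -> time its fetch completes
    total_latency = 0
    for t, page in enumerate(requests):
        # Deliver any fetches that have completed by time t.
        for p, done in list(in_flight.items()):
            if done <= t:
                del in_flight[p]
                cache[p] = None
                cache.move_to_end(p)
                if len(cache) > k:
                    cache.popitem(last=False)  # evict LRU page
        if page in cache:                # true hit: no latency
            cache.move_to_end(page)
        elif page in in_flight:          # delayed hit: wait out the fetch
            total_latency += in_flight[page] - t
        else:                            # miss: start a fetch costing Z
            in_flight[page] = t + Z
            total_latency += Z
    return total_latency
```

With $Z = 1$ this reduces to classical caching, where each miss costs exactly one unit; with larger $Z$, repeated requests to a page mid-fetch accumulate extra waiting time, which is precisely what the delayed-hits model charges for.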