🤖 AI Summary
Online caching aims to minimize the cache miss rate under a finite cache capacity. Existing learning-augmented algorithms achieve $1$-consistency but suffer from poor robustness; conversely, robustification methods often sacrifice consistency or incur high computational overhead. This paper proposes Guard, a lightweight robustification framework that, for the first time, achieves strict $1$-consistency while guaranteeing a robustness bound of $2H_k + 2$ (where $H_k$ is the $k$-th harmonic number), with only constant-time and constant-space overhead per request and no increase in the asymptotic time complexity of the base algorithm. Guard is plug-and-play compatible with diverse learning-augmented caching policies and dynamically adjusts eviction decisions based on prediction confidence. Extensive experiments across multiple real-world datasets and prediction models demonstrate that Guard attains the state-of-the-art trade-off between consistency and robustness.
📝 Abstract
The online caching problem aims to minimize cache misses when serving a sequence of requests under a limited cache size. While naive learning-augmented caching algorithms achieve ideal $1$-consistency, they lack robustness guarantees. Existing robustification methods either sacrifice $1$-consistency or introduce significant computational overhead. In this paper, we introduce Guard, a lightweight robustification framework that enhances the robustness of a broad class of learning-augmented caching algorithms to $2H_k + 2$, while preserving their $1$-consistency. Guard achieves the current best-known trade-off between consistency and robustness, with only $\mathcal{O}(1)$ additional per-request overhead, thereby maintaining the original time complexity of the base algorithm. Extensive experiments across multiple real-world datasets and prediction models validate the effectiveness of Guard in practice.
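To make the trade-off concrete, here is a minimal, self-contained sketch of the general robustification idea the abstract describes: follow a prediction-based eviction rule (which is $1$-consistent when predictions are perfect), but bound the damage of bad predictions by comparing against a robust baseline. This is an illustrative simplification, not the paper's Guard mechanism: the switching rule, the constant `c`, and the shadow-LRU construction below are assumptions made for the example, and this simple switch does not attain the paper's $2H_k + 2$ bound.

```python
from collections import OrderedDict

class GuardedCache:
    """Illustrative guard wrapper (a simplification, NOT the paper's Guard):
    evict according to a predictor of each item's next request time, but
    also simulate a shadow LRU on the same request stream; once our miss
    count exceeds c * (shadow misses) + k, permanently fall back to LRU
    eviction. Good predictions are followed freely; bad ones are capped."""

    def __init__(self, k, predict_next_use, c=2.0):
        self.k, self.predict, self.c = k, predict_next_use, c
        self.cache = OrderedDict()    # our cache, maintained in recency order
        self.shadow = OrderedDict()   # what a plain LRU cache would hold
        self.misses = 0
        self.shadow_misses = 0
        self.tripped = False          # True once we fall back to LRU

    def _shadow_step(self, x):
        """Advance the shadow LRU by one request."""
        if x in self.shadow:
            self.shadow.move_to_end(x)
        else:
            self.shadow_misses += 1
            if len(self.shadow) >= self.k:
                self.shadow.popitem(last=False)   # evict least recently used
            self.shadow[x] = True

    def request(self, x, t):
        """Serve request x at time t; return True on hit, False on miss."""
        self._shadow_step(x)
        if x in self.cache:
            self.cache.move_to_end(x)
            return True
        self.misses += 1
        if not self.tripped and self.misses > self.c * self.shadow_misses + self.k:
            self.tripped = True       # predictions are hurting us: fall back
        if len(self.cache) >= self.k:
            if self.tripped:
                self.cache.popitem(last=False)    # LRU eviction
            else:
                # Belady-style rule: evict the item predicted to be
                # requested furthest in the future.
                victim = max(self.cache, key=lambda y: self.predict(y, t))
                del self.cache[victim]
        self.cache[x] = True
        return False
```

With a perfect next-use oracle the guard never trips and the wrapper behaves exactly like the prediction-based policy; with an adversarial predictor it stops trusting predictions after at most $c \cdot \text{(LRU misses)} + k$ misses. The real Guard framework achieves this kind of protection with strict $1$-consistency, a $2H_k + 2$ robustness bound, and only $\mathcal{O}(1)$ extra work per request.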