🤖 AI Summary
To address the low last-level cache (LLC) prefetching efficiency caused by irregular memory access patterns, this paper proposes a programmable and scalable software-defined LLC prefetcher. Unlike conventional hardware-based prediction mechanisms, our design fully migrates prefetching logic to the software layer, enabling lightweight API-driven specification of access patterns without requiring instruction-set extensions—thereby significantly reducing hardware complexity and improving resource utilization. We implement the prefetcher in gem5 as a hybrid architecture that cooperates with private-cache prefetchers. Experimental evaluation on the GAPBS breadth-first search (BFS) benchmark shows up to 1.74× speedup over the baseline system; when jointly deployed with private-cache prefetchers, it achieves up to 1.40× speedup. These results demonstrate the prefetcher’s effectiveness in adapting to complex, irregular memory access patterns.
📝 Abstract
Modern high-performance architectures employ large last-level caches (LLCs). While large LLCs can reduce average memory access latency for workloads with a high degree of locality, they can also increase latency for workloads with irregular memory access patterns. Prefetchers are widely used to reduce memory latency by prefetching data into the cache hierarchy before it is accessed by the core. However, existing prediction-based prefetchers often struggle with irregular memory access patterns, which are especially prevalent in modern applications. This paper introduces the Pickle Prefetcher, a programmable and scalable LLC prefetcher designed to handle independent irregular memory access patterns effectively. Instead of relying on static heuristics or complex prediction algorithms, Pickle Prefetcher allows software to define its own prefetching strategies using a simple programming interface without expanding the instruction set architecture (ISA). By trading the logic complexity of hardware prediction for software programmability, Pickle Prefetcher can adapt to a wide range of access patterns without requiring extensive hardware resources for prediction. This allows the prefetcher to dedicate its resources to scheduling and issuing timely prefetch requests. Graph applications are an example where the memory access pattern is irregular but easily predictable by software. Through extensive evaluations of the Pickle Prefetcher on gem5 full-system simulations, we demonstrate that Pickle Prefetcher significantly outperforms traditional prefetching techniques. Our results show that Pickle Prefetcher achieves speedups of up to 1.74x on the GAPBS breadth-first search (BFS) implementation over a baseline system. When combined with private cache prefetchers, Pickle Prefetcher provides up to a 1.40x speedup over systems using only private cache prefetchers.
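To make the idea of a software-defined prefetch specification concrete, the sketch below models how software might describe an indirect access pattern such as `A[B[i]]`, common in graph traversal, so that a prefetcher can run ahead of the core. The names (`IndirectPattern`, `plan_prefetches`) and parameters (`lookahead`, `degree`) are illustrative assumptions for exposition only, not the actual Pickle Prefetcher API.

```python
from dataclasses import dataclass

@dataclass
class IndirectPattern:
    """Hypothetical descriptor software hands to the prefetcher."""
    index_array: list  # B: e.g., the BFS frontier (indices into data array A)
    lookahead: int     # how far ahead of the core's position to run
    degree: int        # number of prefetches issued per trigger

def plan_prefetches(pattern: IndirectPattern, core_pos: int) -> list:
    """Return the data-array indices the prefetcher would fetch next.

    The core is currently consuming index_array[core_pos]; the prefetcher
    fetches A[B[core_pos + lookahead]] .. A[B[core_pos + lookahead + degree - 1]],
    clamped to the end of the index array.
    """
    start = core_pos + pattern.lookahead
    end = min(start + pattern.degree, len(pattern.index_array))
    return [pattern.index_array[i] for i in range(start, end)]

# Example: the core is at frontier position 2 with lookahead 4 and degree 2,
# so the prefetcher targets A[B[6]] and A[B[7]].
frontier = [3, 1, 4, 1, 5, 9, 2, 6]
pat = IndirectPattern(index_array=frontier, lookahead=4, degree=2)
print(plan_prefetches(pat, core_pos=2))  # -> [2, 6]
```

A hardware predictor would have to rediscover the `A[B[i]]` relationship from the address stream; here software states it directly, leaving the prefetcher only the scheduling problem of when and how far ahead to issue requests.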