🤖 AI Summary
In long-context large language model (LLM) inference, KV cache access is bottlenecked by memory bandwidth, causing high latency and low throughput. Method: This paper proposes a scalable multi-node DRAM-PIM architecture built on hardware-software co-design: (i) a novel pipelined parallelism mechanism across PIM modules; (ii) a context-length-adaptive Direct PIM Access (DPA) controller that enables dynamic PIM memory management; and (iii) an MLIR-based custom compiler and hardware simulation framework for scheduling optimization. Contribution/Results: Experiments demonstrate that the architecture achieves up to 8.54× and 16.0× higher throughput than multi-GPU and GPU-PIM baselines, respectively, while significantly reducing end-to-end latency, establishing a new hardware-software co-design paradigm for efficient long-context LLM inference deployment.
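The pipelined parallelism the summary describes can be illustrated abstractly: decoder layers are partitioned across PIM modules, and successive micro-batches flow through them in a staggered schedule so that, after a short fill phase, all modules compute concurrently. The sketch below is a generic pipeline-schedule model, not the paper's actual scheduler; the function name and parameters are illustrative.

```python
# Hedged sketch: a classic fill/drain pipeline schedule over PIM modules.
# Module m runs pipeline stage m; micro-batch mb reaches module m at step mb + m.
# This models the idea only -- LoL-PIM's real scheduler is not reproduced here.
def pipeline_schedule(n_modules, n_microbatches):
    """Return, per time step, the list of (module, microbatch) pairs active."""
    steps = []
    total_steps = n_modules + n_microbatches - 1  # fill + steady state + drain
    for t in range(total_steps):
        active = []
        for m in range(n_modules):
            mb = t - m  # micro-batch currently at stage m, if any
            if 0 <= mb < n_microbatches:
                active.append((m, mb))
        steps.append(active)
    return steps

# 4 modules, 6 micro-batches: 9 pipelined steps vs. 24 fully sequential ones.
sched = pipeline_schedule(n_modules=4, n_microbatches=6)
print(len(sched))  # 9
print(sched[3])    # [(0, 3), (1, 2), (2, 1), (3, 0)] -- all modules busy
```

In steady state (step 3 onward here) every module is occupied, which is how pipelining keeps aggregate PIM bandwidth utilized instead of leaving modules idle between sequential layer groups.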
📝 Abstract
The expansion of large language models (LLMs) to hundreds of billions of parameters places significant demands on computational resources, particularly data movement and memory bandwidth. Long-context LLMs, which process sequences of tens of thousands of tokens, further stress the memory system because the complexity of the attention layers and the size of the key-value (KV) cache grow proportionally with context length. Processing-in-Memory (PIM) maximizes memory bandwidth by moving compute to the data and can address this bandwidth bottleneck; however, PIM does not necessarily scale to long-context LLM acceleration because of limited per-module memory capacity, the inflexibility of fixed-function PIM architectures, and static memory management. In this work, we propose LoL-PIM, a multi-node PIM architecture that accelerates long-context LLM inference through hardware-software co-design. In particular, we show how pipeline parallelism can be exploited across multiple PIM modules, and we propose a direct PIM access (DPA) controller (analogous to DMA, but for PIM) that enables dynamic PIM memory management and efficient PIM utilization across a diverse range of context lengths. We developed an MLIR-based compiler for LoL-PIM by extending a commercial PIM compiler; the software modifications were implemented and evaluated directly, while the hardware changes were modeled in a simulator. Our evaluations demonstrate that LoL-PIM significantly improves throughput and reduces latency for long-context LLM inference, outperforming both multi-GPU and GPU-PIM systems (up to 8.54x and 16.0x speedup, respectively), thereby enabling more efficient deployment of LLMs in real-world applications.
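The claim that KV cache size is proportional to context length follows directly from the attention mechanism: each generated token appends one key and one value vector per head per layer. The sketch below quantifies this with illustrative model parameters (a hypothetical 7B-class configuration, not taken from the paper) to show why tens-of-thousands-of-token contexts overwhelm a single memory module.

```python
# Hedged sketch: KV cache footprint grows linearly with context length.
# The model configuration below is illustrative, not from the paper.
def kv_cache_bytes(n_layers, n_heads, head_dim, context_len,
                   batch=1, dtype_bytes=2):
    # Factor of 2: both a key and a value vector are cached per token,
    # per head, per layer. dtype_bytes=2 assumes FP16 storage.
    return 2 * n_layers * n_heads * head_dim * context_len * batch * dtype_bytes

# Hypothetical 7B-class model: 32 layers, 32 heads of dimension 128, FP16.
for ctx in (2_048, 32_768):
    gib = kv_cache_bytes(32, 32, 128, ctx) / 2**30
    print(f"context {ctx:>6}: {gib:.1f} GiB")  # 1.0 GiB vs. 16.0 GiB
```

A 16x longer context means a 16x larger KV cache that must be streamed through memory on every decode step, which is the bandwidth wall motivating both PIM acceleration and LoL-PIM's multi-node capacity scaling.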