🤖 AI Summary
To address the PCIe bandwidth bottleneck in LLM serving, which limits prefix cache loading and model switching, the paper introduces Multipath Memory Access (MMA), a novel memory access mechanism that coordinates multi-path data transfer between GPU and host memory over heterogeneous interconnects (PCIe and NVLink). MMA requires no code modification and deploys transparently via dynamic library injection. By breaking the single-path bandwidth ceiling, MMA achieves a peak GPU–host memory throughput of 245 GB/s (a 4.62× improvement), reduces time-to-first-token by 1.14–2.38×, and cuts model-switching latency by 1.12–2.48× under vLLM's sleep mode. This work establishes a deployable, low-level memory access paradigm for high-throughput, low-latency LLM inference services.
📝 Abstract
The limited bandwidth of PCIe has emerged as a critical bottleneck for large language model (LLM) serving tasks such as prefix cache fetching and model switching. Although intra-server multipath data transfer between GPU and host memory is theoretically possible, heterogeneous protocols such as PCIe and NVLink currently limit the bandwidth between host memory and GPUs to that of a single PCIe link. This limitation results in underutilized intra-server bandwidth. To address this issue, we propose Multipath Memory Access (MMA), a scheme that, to the best of our knowledge, is the first to enable efficient multipath data transfer between GPU and host memory. MMA supports seamless deployment via dynamic library injection, enabling LLM applications to benefit from MMA without any code modification. In our testbed, MMA significantly improves data transfer bandwidth between GPU and host memory, achieving a peak bandwidth of 245 GB/s, a 4.62x speedup over the native single-path bandwidth. End-to-end evaluations demonstrate that MMA reduces the time-to-first-token (TTFT) for LLM serving by 1.14x to 2.38x and decreases model-switching latency in vLLM's sleep mode by 1.12x to 2.48x.
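To make the core idea concrete, here is a minimal, hedged sketch of the chunk-splitting principle behind multipath transfer: a large buffer is divided into contiguous slices, and each slice is moved by its own worker, standing in for an independent interconnect path (e.g., a direct PCIe link versus a route bounced through a peer GPU over NVLink). The function name and thread-based simulation are illustrative assumptions, not the paper's actual implementation, which operates at the CUDA memory-copy level.

```python
from concurrent.futures import ThreadPoolExecutor

def multipath_copy(src: bytes, num_paths: int = 3) -> bytearray:
    """Illustrative stand-in for MMA-style multipath transfer (hypothetical
    helper, not the paper's API): split src into num_paths contiguous
    chunks and copy each chunk on its own worker, where each worker
    models one independent interconnect path."""
    dst = bytearray(len(src))
    chunk = -(-len(src) // num_paths)  # ceiling division for chunk size

    def copy_chunk(i: int) -> None:
        # Each "path" moves one disjoint slice, so workers never overlap.
        start, end = i * chunk, min((i + 1) * chunk, len(src))
        dst[start:end] = src[start:end]

    with ThreadPoolExecutor(max_workers=num_paths) as pool:
        list(pool.map(copy_chunk, range(num_paths)))
    return dst
```

In the real system, each slice would instead be issued as an asynchronous GPU memory copy on a distinct physical route, and the transparent deployment would come from interposing on the CUDA copy calls via dynamic library injection (the standard `LD_PRELOAD` mechanism), so applications see a single copy while the library fans it out across paths.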