MultiPath Transfer Engine: Breaking GPU and Host-Memory Bandwidth Bottlenecks in LLM Services

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address PCIe bandwidth bottlenecks in LLM serving—specifically limiting prefix cache loading and model switching—the paper introduces Multipath Memory Access (MMA), a novel memory access mechanism enabling coordinated multi-path data transfer between GPU and host memory over heterogeneous interconnects (PCIe and NVLink). MMA requires no code modification and deploys transparently via dynamic library injection. By breaking the single-path bandwidth ceiling, MMA achieves a GPU–host memory peak throughput of 245 GB/s (4.62× improvement), reduces first-token latency by 1.14–2.38×, and cuts model-switching latency by 1.12–2.48× under vLLM’s sleeping mode. This work establishes a deployable, low-level memory access paradigm for high-throughput, low-latency LLM inference services.
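The core idea of coordinated multi-path transfer can be sketched as a bandwidth-proportional scheduler: one GPU–host copy is split across several links so that all paths finish at the same time, making the effective bandwidth roughly the sum of the per-path bandwidths. This is only an illustrative model, not the paper's implementation, and the path bandwidths below are made-up numbers, not measurements from the paper:

```python
# Sketch (illustrative, not MMA's actual code): split one host->GPU copy
# across several paths in proportion to each path's bandwidth, so all paths
# finish together and the aggregate bandwidth approaches the sum.

def split_transfer(total_bytes, path_bw_gbps):
    """Return per-path byte counts proportional to each path's bandwidth."""
    total_bw = sum(path_bw_gbps)
    shares = [int(total_bytes * bw / total_bw) for bw in path_bw_gbps]
    shares[-1] += total_bytes - sum(shares)  # absorb integer-rounding remainder
    return shares

def completion_time_s(shares, path_bw_gbps):
    """The copy completes when the slowest-finishing path completes."""
    return max(b / (bw * 1e9) for b, bw in zip(shares, path_bw_gbps))

# Hypothetical topology: one direct PCIe link plus two NVLink-bridged detours
# through peer GPUs (bandwidths in GB/s are assumptions for illustration).
paths = [25.0, 40.0, 40.0]
n = 8 * 2**30  # an 8 GiB transfer

shares = split_transfer(n, paths)
t = completion_time_s(shares, paths)
effective_gbps = n / t / 1e9  # ~ sum(paths), well above any single link
```

Because the split is proportional, no path idles while another is still copying, which is why the effective bandwidth exceeds what any single PCIe link can deliver.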

📝 Abstract
The limited bandwidth of PCIe has emerged as a critical bottleneck for large language model (LLM) serving tasks such as prefix cache fetching and model switching. Although intra-server multipath data transfer between GPU and host memory is theoretically possible, heterogeneous protocols such as PCIe and NVLink currently limit the bandwidth between host memory and GPUs to that of a single PCIe link. This limitation results in underutilized intra-server bandwidth. To address this issue, we propose Multipath Memory Access (MMA), a scheme that, to the best of our knowledge, is the first to enable efficient multipath data transfer between GPU and host memory. MMA supports seamless deployment via dynamic library injection, enabling LLM applications to benefit from MMA without requiring any code modification. In our testbed, MMA significantly improves the data transfer bandwidth between the GPU and host memory, achieving a peak bandwidth of 245 GB/s, a 4.62× speedup over the native single-path bandwidth. End-to-end evaluations demonstrate that MMA reduces the time-to-first-token (TTFT) for LLM serving by 1.14×–2.38× and decreases model-switching latency in vLLM's sleep mode by 1.12×–2.48×.
Problem

Research questions and friction points this paper is trying to address.

Addresses PCIe bandwidth bottleneck in LLM services
Enables multipath data transfer between GPU and host memory
Improves token generation and model switching latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

MMA enables multipath GPU-host memory data transfer
MMA uses dynamic library injection for seamless deployment
MMA achieves 4.62x bandwidth speedup over single-path
Authors

Lingfeng Tang, Hunan University
Daoping Zhang, School of Mathematical Sciences, Nankai University
Junjie Chen, Hunan University
Peihao Huang, Hunan University
Feng Jin, Tencent
Chengguang Xu, Tencent
Yuxin Chen, Tencent
Feiqiang Sun, Tencent
Guo Chen, Hunan University