SPAD: Specialized Prefill and Decode Hardware for Disaggregated LLM Inference

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language model (LLM) inference exhibits starkly heterogeneous computational characteristics between the compute-intensive prefill phase and the memory-bound decoding phase; conventional homogeneous hardware underutilizes both memory bandwidth and compute resources, increasing service costs. Method: We propose a “stage-separated” heterogeneous hardware architecture featuring co-designed, dedicated chips for prefill and decoding—enabling dynamic resource reallocation—and integrate systolic-array optimizations, cost-effective GDDR memory, power- and throughput-targeted hardware pruning, and a decoupled inference pipeline for tight software-hardware co-design. Results: Compared to simulated H100-based clusters, our architecture reduces hardware cost by 19–41%, lowers TDP by 2–17%, and maintains equivalent end-to-end latency. With dynamic scheduling, additional cost savings of 11–43% are achieved—all while preserving inference accuracy and functional compatibility.

📝 Abstract
Large Language Models (LLMs) have gained popularity in recent years, driving up the demand for inference. LLM inference is composed of two phases with distinct characteristics: a compute-bound prefill phase followed by a memory-bound decode phase. To efficiently serve LLMs, prior work proposes prefill-decode disaggregation to run each phase on separate hardware. However, existing hardware poorly matches the different requirements of each phase. Current datacenter GPUs and TPUs follow a more-is-better design philosophy that maximizes compute and memory resources, causing memory bandwidth underutilization in the prefill phase and compute underutilization in the decode phase. Such underutilization directly translates into increased serving costs. This paper proposes SPAD (Specialized Prefill and Decode hardware), adopting a less-is-more methodology to design specialized chips tailored to the distinct characteristics of prefill and decode phases. The proposed Prefill Chips have larger systolic arrays and use cost-effective GDDR memory, whereas the proposed Decode Chips retain high memory bandwidth but reduce compute capacity. Compared to modeled H100s, simulations show that the proposed Prefill Chips deliver 8% higher prefill performance on average at 52% lower hardware cost, while the proposed Decode Chips achieve 97% of the decode performance with 28% lower TDP. End-to-end simulations on production traces show that SPAD reduces hardware cost by 19%-41% and TDP by 2%-17% compared to modeled baseline clusters while offering the same performance. Even when models and workloads change, SPAD can reallocate either type of chip to run either phase and still achieve 11%-43% lower hardware costs, demonstrating the longevity of the SPAD design.
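The abstract's compute-bound/memory-bound split can be made concrete with a back-of-the-envelope roofline calculation. The sketch below is illustrative only: the peak-throughput and bandwidth figures are approximate public H100 specs, and the matrix sizes are assumed toy values, not numbers from the paper.

```python
# Roofline intuition for prefill vs. decode (illustrative numbers).
# Arithmetic intensity = FLOPs / bytes moved. A matmul is compute-bound when
# its intensity exceeds the chip's ridge point (peak FLOP/s / bandwidth).

PEAK_FLOPS = 989e12      # approx. H100 dense BF16 peak, FLOP/s (assumption)
BANDWIDTH = 3.35e12      # approx. H100 HBM3 bandwidth, bytes/s (assumption)
ridge = PEAK_FLOPS / BANDWIDTH  # FLOP/byte needed to be compute-bound

def intensity(m, k, n, dtype_bytes=2):
    """Arithmetic intensity of an (m x k) @ (k x n) matmul,
    assuming inputs, weights, and outputs each move once."""
    flops = 2 * m * k * n
    bytes_moved = dtype_bytes * (m * k + k * n + m * n)
    return flops / bytes_moved

# Prefill: thousands of prompt tokens share one weight read -> large m.
prefill = intensity(m=4096, k=8192, n=8192)
# Decode: one new token per request; even a batch of 16 keeps m tiny,
# so the weight traffic dominates and intensity stays low.
decode = intensity(m=16, k=8192, n=8192)

print(f"ridge point  ~ {ridge:.0f} FLOP/byte")
print(f"prefill      ~ {prefill:.0f} FLOP/byte (compute-bound: {prefill > ridge})")
print(f"decode       ~ {decode:.0f} FLOP/byte (memory-bound: {decode < ridge})")
```

With these toy sizes, prefill lands far above the ridge point (around 2048 FLOP/byte vs. roughly 295) while decode sits far below it (around 16), which is the asymmetry SPAD's Prefill and Decode Chips each target separately.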
Problem

Research questions and friction points this paper is trying to address.

Optimizing hardware utilization for LLM prefill and decode phases
Reducing computational waste in disaggregated LLM inference systems
Designing specialized chips to lower inference costs and power consumption
Innovation

Methods, ideas, or system contributions that make the work stand out.

Specialized chips for prefill and decode phases
Prefill chips use large systolic arrays with GDDR
Decode chips reduce compute but keep high bandwidth
Hengrui Zhang
Princeton University
Pratyush Patel
University of Washington
August Ning
Princeton University
D. Wentzlaff
Princeton University

Computer Systems, Computer Architecture