🤖 AI Summary
Large language model (LLM) inference exhibits starkly heterogeneous computational characteristics across its two phases: a compute-bound prefill phase followed by a memory-bound decode phase. Conventional homogeneous hardware therefore leaves memory bandwidth underutilized during prefill and compute underutilized during decode, which translates directly into higher serving costs. Method: The authors propose SPAD, a stage-separated heterogeneous architecture with chips specialized for each phase: Prefill Chips pair larger systolic arrays with cost-effective GDDR memory, while Decode Chips retain high memory bandwidth but shed compute capacity, and both plug into a disaggregated prefill/decode serving pipeline through tight software-hardware co-design. Results: Compared to simulated H100-based clusters, the architecture cuts hardware cost by 19–41% and TDP by 2–17% while matching end-to-end performance. Even when models or workloads change, either chip type can be reallocated to run either phase and still achieves 11–43% lower hardware cost, all without affecting inference accuracy or functional compatibility.
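The phase asymmetry the summary describes can be checked with back-of-the-envelope roofline arithmetic. The sketch below is illustrative only and not from the paper: the peak-FLOPS and bandwidth figures are assumed H100-class ballpark values, the 70B parameter count is hypothetical, and KV-cache traffic is ignored.

```python
# Illustrative roofline check: why prefill is compute-bound and decode is
# memory-bound. All hardware numbers are assumptions (H100-class ballpark),
# not figures taken from the paper.

PEAK_FLOPS = 989e12    # assumed dense BF16 throughput, FLOP/s
PEAK_BW = 3.35e12      # assumed HBM bandwidth, bytes/s
RIDGE = PEAK_FLOPS / PEAK_BW  # FLOP/byte where compute and memory balance

def arithmetic_intensity(params: float, tokens: int,
                         bytes_per_param: int = 2) -> float:
    """Rough FLOP-per-byte for one forward pass over `tokens` tokens.

    Model: ~2*params FLOPs per token (matmuls dominate), while the full
    weight set is streamed from memory once per pass and amortized across
    all tokens processed together (KV-cache traffic ignored for simplicity).
    """
    flops = 2 * params * tokens
    bytes_moved = params * bytes_per_param
    return flops / bytes_moved

PARAMS = 70e9  # hypothetical 70B-parameter model

for phase, tokens in [("prefill, 2048-token prompt", 2048),
                      ("decode, 1 token per step  ", 1)]:
    ai = arithmetic_intensity(PARAMS, tokens)
    verdict = "compute-bound" if ai > RIDGE else "memory-bound"
    print(f"{phase}: {ai:7.1f} FLOP/byte vs ridge {RIDGE:.0f} -> {verdict}")
```

Prefill amortizes one pass over the weights across thousands of prompt tokens, landing far above the ridge point; decode moves the same weights for a single output token, landing far below it.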
📝 Abstract
Large Language Models (LLMs) have gained popularity in recent years, driving up the demand for inference. LLM inference is composed of two phases with distinct characteristics: a compute-bound prefill phase followed by a memory-bound decode phase. To efficiently serve LLMs, prior work proposes prefill-decode disaggregation to run each phase on separate hardware. However, existing hardware poorly matches the different requirements of each phase. Current datacenter GPUs and TPUs follow a more-is-better design philosophy that maximizes compute and memory resources, causing memory bandwidth underutilization in the prefill phase and compute underutilization in the decode phase. Such underutilization directly translates into increased serving costs. This paper proposes SPAD (Specialized Prefill and Decode hardware), adopting a less-is-more methodology to design specialized chips tailored to the distinct characteristics of prefill and decode phases. The proposed Prefill Chips have larger systolic arrays and use cost-effective GDDR memory, whereas the proposed Decode Chips retain high memory bandwidth but reduce compute capacity. Compared to modeled H100s, simulations show that the proposed Prefill Chips deliver 8% higher prefill performance on average at 52% lower hardware cost, while the proposed Decode Chips achieve 97% of the decode performance with 28% lower TDP. End-to-end simulations on production traces show that SPAD reduces hardware cost by 19%-41% and TDP by 2%-17% compared to modeled baseline clusters while offering the same performance. Even when models and workloads change, SPAD can reallocate either type of chip to run either phase and still achieve 11%-43% lower hardware costs, demonstrating the longevity of the SPAD design.
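The abstract's cost and TDP ranges come from end-to-end simulation on production traces; a toy provisioning model shows mechanically where such savings can originate. In the sketch below everything is a placeholder: the per-chip cost, TDP, and throughput ratios loosely echo the per-chip figures quoted above (a 52%-cheaper prefill chip at 8% higher prefill throughput; a decode chip at 97% of decode throughput and 28% lower TDP), while the absolute specs and offered loads are invented.

```python
# Toy provisioning model for a prefill/decode-disaggregated cluster.
# All chip specs, prices, and loads are hypothetical placeholders, not
# SPAD's or NVIDIA's numbers; the point is the sizing logic, not the values.
from dataclasses import dataclass
from math import ceil

@dataclass
class Chip:
    name: str
    cost: float          # relative hardware cost (baseline GPU = 1.0)
    tdp_w: float         # thermal design power, watts
    prefill_tput: float  # prompt tokens/s sustained in prefill
    decode_tput: float   # output tokens/s sustained in decode

BASELINE = Chip("baseline-gpu", 1.00, 700, 50_000, 4_000)
PREFILL  = Chip("prefill-chip", 0.48, 700, 54_000, 1_500)  # big systolic array, GDDR
DECODE   = Chip("decode-chip",  0.80, 500,  8_000, 3_900)  # HBM kept, compute trimmed

def provision(prefill_load: float, decode_load: float,
              p_chip: Chip, d_chip: Chip) -> tuple[int, int, float, float]:
    """Smallest chip pools that meet the offered token rate of each phase."""
    n_p = ceil(prefill_load / p_chip.prefill_tput)
    n_d = ceil(decode_load / d_chip.decode_tput)
    cost = n_p * p_chip.cost + n_d * d_chip.cost
    tdp = n_p * p_chip.tdp_w + n_d * d_chip.tdp_w
    return n_p, n_d, cost, tdp

load = (2_000_000, 150_000)  # prompt tokens/s, output tokens/s (made up)
for label, pools in [("homogeneous", (BASELINE, BASELINE)),
                     ("specialized", (PREFILL, DECODE))]:
    n_p, n_d, cost, tdp = provision(*load, *pools)
    print(f"{label}: {n_p} prefill + {n_d} decode chips, "
          f"cost {cost:.1f}, TDP {tdp / 1000:.1f} kW")
```

Sizing each pool independently against its own phase's load is what lets the cheaper prefill silicon and the lower-power decode silicon both pay off. The same `provision()` logic also captures the reallocation argument: when the workload mix shifts, either `Chip` can be passed into either slot and the pools re-sized.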