🤖 AI Summary
This work identifies memory bandwidth and interconnect latency, rather than compute capacity, as the primary performance bottleneck in large language model (LLM) inference, a limitation especially pronounced during autoregressive decoding. To address this challenge, the study proposes four hardware architecture directions tailored for data centers yet also applicable to mobile scenarios: high-bandwidth flash storage, processing-near-memory (PNM), 3D memory-logic stacking, and low-latency interconnects. Together, these directions form a coherent technical roadmap for next-generation AI accelerators, emphasizing memory-centric solutions to overcome the constraints of current systems.
📝 Abstract
Large Language Model (LLM) inference is hard. The autoregressive Decode phase of the underlying Transformer model makes LLM inference fundamentally different from training. Exacerbated by recent AI trends, the primary challenges are memory and interconnect rather than compute. To address these challenges, we highlight four architecture research opportunities: High Bandwidth Flash for 10X memory capacity with HBM-like bandwidth; Processing-Near-Memory and 3D memory-logic stacking for high memory bandwidth; and low-latency interconnects to speed up communication. While our focus is datacenter AI, we also review their applicability for mobile devices.
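The claim that Decode is memory-bound rather than compute-bound can be illustrated with a back-of-envelope roofline calculation. The sketch below uses illustrative numbers (a hypothetical 70B-parameter FP16 model and an accelerator with 1000 TFLOP/s peak compute and 3.35 TB/s HBM bandwidth) that are assumptions, not figures from the paper:

```python
# Back-of-envelope roofline check for autoregressive decode (batch size 1).
# All hardware and model numbers below are illustrative assumptions.

def decode_arithmetic_intensity(n_params: float, bytes_per_param: float) -> float:
    """FLOPs per byte moved when generating one token.

    Each generated token costs roughly 2 FLOPs per parameter
    (multiply + add), and with batch size 1 there is no weight reuse,
    so every parameter is streamed from memory once per token.
    """
    flops = 2.0 * n_params
    bytes_moved = n_params * bytes_per_param
    return flops / bytes_moved

# Hypothetical 70B-parameter model in FP16 (2 bytes per parameter).
intensity = decode_arithmetic_intensity(70e9, 2.0)

# Hypothetical accelerator: 1000 TFLOP/s peak compute, 3.35 TB/s HBM.
# The ridge point is the intensity at which compute and bandwidth balance.
ridge_point = 1000e12 / 3.35e12

# Decode sits far to the left of the ridge point: bandwidth-bound.
print(f"decode intensity: {intensity:.1f} FLOP/byte")
print(f"ridge point:      {ridge_point:.0f} FLOP/byte")
print(f"memory-bound:     {intensity < ridge_point}")
```

At ~1 FLOP/byte, decode is two orders of magnitude below such an accelerator's ridge point, which is why capacity and bandwidth (high-bandwidth flash, PNM, 3D stacking) matter more than additional compute.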