🤖 AI Summary
To co-accelerate the low- and high-precision matrix multiplications (MatMuls) in 1-bit large language model (LLM) inference, this work proposes a heterogeneous in-memory computing (IMC) architecture: analog IMC handles the low-precision 1-bit MatMuls in the projection layers, while a digital systolic array executes the high-precision MatMuls in the attention heads. It is the first design to enable collaborative cross-substructure scheduling of IMC and digital arrays on a single chip, supporting mixed-precision joint optimization across the attention and feed-forward network layers. Compared to conventional accelerators, the architecture achieves up to an 80× throughput improvement (tokens/s) and a 70% energy-efficiency gain (tokens/J). Against state-of-the-art analog IMC approaches, it delivers at least 2× higher compute throughput (GOPS) and 5× better energy efficiency (GOPS/W).
📝 Abstract
In this paper, we propose PIM-LLM, a hybrid architecture developed to accelerate 1-bit large language models (LLMs). PIM-LLM leverages analog processing-in-memory (PIM) architectures and digital systolic arrays to accelerate low-precision matrix multiplication (MatMul) operations in projection layers and high-precision MatMul operations in attention heads of 1-bit LLMs, respectively. Our design achieves up to roughly an 80x improvement in tokens per second and a 70% increase in tokens per joule compared to conventional hardware accelerators. Additionally, PIM-LLM outperforms previous PIM-based LLM accelerators, setting a new benchmark with at least 2x and 5x improvements in GOPS and GOPS/W, respectively.
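The core idea of the split can be sketched numerically: projection-layer weights in a 1-bit LLM are constrained to {-1, 0, +1}, so their MatMuls reduce to signed accumulation (a natural fit for an analog PIM crossbar), while attention-score MatMuls involve two full-precision activation tensors and go to the digital systolic array. The sketch below is illustrative only, assuming NumPy as a stand-in for both hardware paths; the function names and shapes are not from the paper.

```python
import numpy as np

def pim_ternary_matmul(x, w_ternary):
    # Analog-PIM-style path (illustrative): weights are restricted to
    # {-1, 0, +1}, so each "multiply" is just a signed add of the activation.
    assert set(np.unique(w_ternary)).issubset({-1, 0, 1})
    return x @ w_ternary  # stand-in for the analog crossbar MAC

def systolic_matmul(q, k):
    # Digital systolic-array path (illustrative): high-precision
    # activation-by-activation MatMul, as in attention scores Q @ K^T.
    return q @ k.T

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)).astype(np.float32)          # token activations
w = rng.integers(-1, 2, size=(8, 8)).astype(np.float32)     # ternary projection weights

# Projection layers -> PIM path; attention heads -> systolic path.
q = pim_ternary_matmul(x, w)
k = pim_ternary_matmul(x, w)
scores = systolic_matmul(q, k)
print(scores.shape)  # (4, 4)
```

The point of the split is that the weight-by-activation MatMuls (the bulk of the FLOPs) never need a real multiplier, while the activation-by-activation attention MatMuls, which cannot be quantized to 1 bit without accuracy loss, stay on precise digital hardware.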