PIM-LLM: A High-Throughput Hybrid PIM Architecture for 1-bit LLMs

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of co-accelerating low- and high-precision matrix multiplication (MatMul) in 1-bit large language model (LLM) inference, this work proposes a heterogeneous in-memory computing (IMC) architecture: analog IMC handles the low-precision 1-bit MatMul in projection layers, while a digital systolic array executes the high-precision MatMul in attention heads. It is the first design to coordinate scheduling of IMC and digital arrays on a single chip, enabling mixed-precision joint optimization across attention and feed-forward network layers. Compared to conventional accelerators, the architecture achieves up to an 80× throughput improvement (tokens/s) and a 70% energy-efficiency gain (tokens/J). Against state-of-the-art analog IMC approaches, it delivers at least 2× higher computational throughput (GOPS) and 5× better energy efficiency (GOPS/W).

📝 Abstract
In this paper, we propose PIM-LLM, a hybrid architecture developed to accelerate 1-bit large language models (LLMs). PIM-LLM leverages analog processing-in-memory (PIM) architectures and digital systolic arrays to accelerate low-precision matrix multiplication (MatMul) operations in projection layers and high-precision MatMul operations in attention heads of 1-bit LLMs, respectively. Our design achieves up to roughly 80x improvement in tokens per second and a 70% increase in tokens per joule compared to conventional hardware accelerators. Additionally, PIM-LLM outperforms previous PIM-based LLM accelerators, setting a new benchmark with at least 2x and 5x improvement in GOPS and GOPS/W, respectively.
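The split described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names are hypothetical, and the 1-bit weights are modeled as ternary {-1, 0, +1} matrices in the style of BitNet-like 1-bit LLMs. The point is only to show why projection-layer MatMul (static low-precision weights) suits analog PIM, while attention-head MatMul (two runtime activation operands) stays on a high-precision digital systolic array.

```python
import numpy as np

def onebit_projection_matmul(x, w_ternary):
    # Projection-layer MatMul with 1-bit (ternary) weights.
    # Because every weight is -1, 0, or +1, the multiplications reduce to
    # additions/subtractions -- the workload analog PIM crossbars target.
    assert set(np.unique(w_ternary)).issubset({-1, 0, 1})
    return x @ w_ternary  # hardware would realize this without multipliers

def attention_matmul(q, k):
    # Attention-score MatMul (Q @ K^T): both operands are runtime
    # activations, so full precision is kept and the operation is
    # routed to the digital systolic array instead.
    return q @ k.T

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))           # activations entering a projection
w = rng.integers(-1, 2, size=(8, 8))      # ternary ("1-bit") weight matrix
q = rng.standard_normal((4, 8))           # query activations
k = rng.standard_normal((4, 8))           # key activations

proj = onebit_projection_matmul(x, w)     # low-precision path (analog PIM)
scores = attention_matmul(q, k)           # high-precision path (systolic array)
print(proj.shape, scores.shape)           # (4, 8) (4, 4)
```

In the actual architecture both paths run concurrently on dedicated hardware; here they are ordinary NumPy calls, which only captures the precision split, not the scheduling.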
Problem

Research questions and friction points this paper is trying to address.

Accelerating 1-bit LLMs with hybrid PIM architecture
Improving efficiency in low-precision matrix multiplication
Enhancing performance in attention heads computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid PIM architecture for 1-bit LLMs
Analog PIM and digital systolic arrays
Up to 80x improvement in tokens per second
Jinendra Malekar
Computer Science and Engineering, University of South Carolina, Columbia, SC 29201
Peyton S. Chandarana
Computer Science and Engineering, University of South Carolina, Columbia, SC 29201
Md Hasibul Amin
Computer Science and Engineering, University of South Carolina, Columbia, SC 29201
Mohammed E. Elbtity
Computer Science and Engineering, University of South Carolina, Columbia, SC 29201
Ramtin Zand
Assistant Professor, University of South Carolina
Edge Computing, Neuromorphic Computing, In-Memory Computing, Machine Learning, Processing-In-Memory