PIM-SHERPA: Software Method for On-device LLM Inference by Resolving PIM Memory Attribute and Layout Inconsistencies

📅 2026-03-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the efficiency bottleneck in on-device large language model inference on Processing-in-Memory (PIM) systems, where mismatches between memory characteristics and weight layouts across the prefill and decode phases degrade performance. The authors propose a purely software-based optimization that, for the first time, identifies and resolves the coordination issue between cacheable and non-cacheable memory regions in PIM. By introducing DRAM Double Buffering (DDB) and Online Weight Rearrangement (OWR) with swizzled memory copy, the method dynamically reorganizes the data layout prior to GEMM execution. This approach requires no hardware modifications and is deployable on production-grade PIM systems. Evaluated on the Llama 3.2 model, it reduces memory footprint by 47.8%–49.7% compared to the baseline while sustaining inference throughput close to the theoretical maximum.
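The swizzled memory copy at the heart of OWR can be pictured as a permutation that converts a host-friendly weight layout into a bank-interleaved, PIM-aware one just before GEMM. The sketch below is illustrative only: the paper's actual swizzle pattern is hardware-specific, and the bank count and function names here are assumptions, not the authors' API.

```python
# Illustrative OWR-style swizzled copy: interleave the rows of a
# host-layout weight matrix across PIM banks so each bank receives a
# contiguous slice of the work. NUM_BANKS is a hypothetical value.

NUM_BANKS = 4

def swizzled_copy(weights_host, num_banks=NUM_BANKS):
    """Reorder rows so row i lands in bank i % num_banks."""
    banks = [[] for _ in range(num_banks)]
    for i, row in enumerate(weights_host):
        banks[i % num_banks].append(row)
    # Concatenate the bank-local rows into one PIM-side buffer.
    return [row for bank in banks for row in bank]

# Host layout: rows 0..7 in order; PIM layout groups rows by bank
# (bank 0 holds rows 0 and 4, bank 1 holds rows 1 and 5, ...).
host = [[i] * 2 for i in range(8)]
pim = swizzled_copy(host)
```

In a real system this copy would run on demand immediately before the GEMM kernel, so only one layer's weights need a PIM-aware copy at any time.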

📝 Abstract
On-device deployments of large language models (LLMs) are rapidly proliferating across mobile and edge platforms. LLM inference comprises a compute-intensive prefill phase and a memory-bandwidth-intensive decode phase, and the decode phase is widely recognized, in both academia and industry, as well-suited to processing-in-memory (PIM). However, practical PIM-enabled systems face two obstacles between these phases: a memory attribute inconsistency, in which prefill favors placing weights in a cacheable region for reuse whereas decode requires weights in a non-cacheable region to reliably trigger PIM, and a weight layout inconsistency between host-friendly and PIM-aware layouts. To address these problems, we introduce PIM-SHERPA, a software-only method for efficient on-device LLM inference that resolves the PIM memory attribute and layout inconsistencies. PIM-SHERPA provides two approaches: DRAM double buffering (DDB), which keeps a single PIM-aware copy of the weights in the non-cacheable region while prefetching the swizzled weights of the next layer into small cacheable buffers, and online weight rearrangement with swizzled memory copy (OWR), which performs the swizzled memory copy on demand immediately before GEMM. Compared to a baseline PIM emulation system, PIM-SHERPA achieves approximately 47.8–49.7% memory capacity savings while maintaining performance comparable to the theoretical maximum on the Llama 3.2 model. To the best of our knowledge, this is the first work to identify the memory attribute inconsistency and propose effective solutions on product-level PIM-enabled systems.
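The DDB approach described in the abstract can be sketched as a classic double-buffering loop: while layer k computes out of one small cacheable buffer, layer k+1's weights are staged into the other. This is a minimal sketch under simplifying assumptions: "prefetch" is modeled as a plain copy, layers are lists of numbers standing in for weight tensors, and the names (`prefetch`, `run_layers`) are illustrative rather than the paper's API.

```python
# DDB-style double buffering: a single PIM-aware weight copy stays in the
# (non-cacheable) pim_weights region; two small "cacheable" buffers
# alternate, so staging layer k+1 overlaps with computing layer k.

def prefetch(pim_weights, layer_idx, buf):
    """Stage one layer's weights into a small cacheable buffer (modeled as a copy)."""
    buf.clear()
    buf.extend(pim_weights[layer_idx])

def run_layers(pim_weights, x):
    bufs = ([], [])                       # two small cacheable buffers
    prefetch(pim_weights, 0, bufs[0])     # warm-up: stage layer 0
    for k in range(len(pim_weights)):
        if k + 1 < len(pim_weights):      # overlap: stage layer k+1 while k runs
            prefetch(pim_weights, k + 1, bufs[(k + 1) % 2])
        x = x * sum(bufs[k % 2])          # stand-in for the layer's GEMM
    return x

# Example: three toy "layers" applied to input 1.0.
result = run_layers([[1, 1], [2], [3]], 1.0)  # 1*2 = 2, *2 = 4, *3 = 12
```

Because only two small buffers are live at once, the cacheable footprint stays bounded by one layer's weights rather than the whole model, which is where the reported memory capacity savings come from.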
Problem

Research questions and friction points this paper is trying to address.

PIM
LLM inference
memory attribute inconsistency
weight layout inconsistency
on-device deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Processing-in-Memory (PIM)
On-device LLM inference
Memory attribute inconsistency
Weight layout optimization
Software-only solution
👥 Authors
Sunjung Lee
Samsung Advanced Institute of Technology, Suwon, South Korea
Sanghoon Cha
Samsung Advanced Institute of Technology, Suwon, South Korea
Hyeonsu Kim
Samsung Advanced Institute of Technology, Suwon, South Korea
Seungwoo Seo
Samsung Advanced Institute of Technology, Suwon, South Korea
Yuhwan Ro
Samsung Advanced Institute of Technology, Suwon, South Korea
Sukhan Lee
Samsung Electronics, Hwaseong, South Korea
Byeongho Kim
Samsung Electronics, Hwaseong, South Korea
Yongjun Park
Yonsei University
Compiler · Computer architecture
Kyomin Sohn
Samsung Electronics, Hwaseong, South Korea
Seungwon Lee
University of Calgary/Alberta Health Services
Jaehoon Yu
Samsung Electronics
Machine Learning · Pattern Recognition · Computer Vision · Computer Architecture