🤖 AI Summary
This work addresses the efficiency bottleneck in on-device large language model inference on Processing-in-Memory (PIM) systems, where mismatches between memory attributes and weight layouts across the prefill and decode phases degrade performance. The authors propose a purely software-based optimization that, for the first time, identifies and resolves the coordination issue between cacheable and non-cacheable memory regions in PIM. By introducing DRAM Double Buffering (DDB) and Online Weight Rearrangement (OWR) with swizzled memory copy, the method dynamically reorganizes the data layout before GEMM execution. The approach requires no hardware modifications and is deployable on production-grade PIM systems. Evaluated on the Llama 3.2 model, it reduces memory footprint by 47.8%–49.7% compared to the baseline while sustaining inference throughput close to the theoretical peak.
📝 Abstract
On-device deployments of large language models (LLMs) are rapidly proliferating across mobile and edge platforms. LLM inference comprises a compute-intensive prefill phase and a memory-bandwidth-intensive decode phase, and the decode phase has been widely recognized, in both academia and industry, as well-suited to processing-in-memory (PIM). However, practical PIM-enabled systems face two obstacles between these phases: a memory attribute inconsistency, in which prefill favors placing weights in a cacheable region for reuse whereas decode requires weights in a non-cacheable region to reliably trigger PIM, and a weight layout inconsistency between host-friendly and PIM-aware layouts. To address these problems, we introduce \textit{PIM-SHERPA}, a software-only method for efficient on-device LLM inference that resolves the PIM memory attribute and layout inconsistencies. PIM-SHERPA provides two approaches: DRAM double buffering (DDB), which keeps a single copy of the PIM-aware weights in the non-cacheable region while prefetching the swizzled weights of the next layer into small cacheable buffers, and online weight rearrangement with swizzled memory copy (OWR), which performs an on-demand swizzled memory copy immediately before GEMM. Compared to a baseline PIM emulation system, PIM-SHERPA achieves approximately 47.8–49.7\% memory capacity savings while maintaining performance comparable to the theoretical maximum on the Llama 3.2 model. To the best of our knowledge, this is the first work to identify the memory attribute inconsistency and to propose effective solutions on product-level PIM-enabled systems.