🤖 AI Summary
This work evaluates the energy efficiency and performance potential of unary arithmetic for matrix multiplication (GEMM) in low-precision deep learning accelerators. It presents the first rigorous post-synthesis hardware assessment of three state-of-the-art unary GEMM architectures—uGEMM, tuGEMM, and tubGEMM—systematically analyzing their behavior across varying bit widths, matrix dimensions, and realistic weight sparsity patterns from actual models, including CNNs and LLaMA2. The results demonstrate that, under specific configurations, unary GEMM can significantly outperform conventional binary designs, offering a promising high-efficiency computing paradigm for edge AI inference and clearly delineating its optimal application scenarios.
📝 Abstract
General matrix multiplication (GEMM) is a fundamental operation in deep learning (DL). As DL moves increasingly toward low precision, recent works have proposed novel unary GEMM designs as an alternative to conventional binary GEMM hardware. A rigorous evaluation of recent unary and binary GEMM designs is needed to assess the potential of unary hardware for future DL compute. This paper focuses on unary GEMM designs for integer-based DL inference and presents a detailed evaluation of three of the latest unary design proposals, namely uGEMM, tuGEMM, and tubGEMM, by comparing them to a conventional binary GEMM. Rigorous post-synthesis evaluations beyond prior works are performed across varying bit-widths and matrix sizes to assess the designs' tradeoffs and identify optimal sweet spots. Further, we perform a weight sparsity analysis across eight pretrained convolutional neural networks (CNNs) and the LLaMA2 large language model (LLM). In this work, we demonstrate how unary GEMM can be effectively used for energy-efficient compute in future edge AI accelerators.
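To make the unary paradigm concrete, below is a minimal, illustrative Python sketch of temporal (thermometer-coded) unary multiplication and the dot product at the core of a GEMM. This is a simplification for intuition only, not the actual uGEMM/tuGEMM/tubGEMM hardware; the function names (`to_unary`, `unary_multiply`, `unary_dot`) and the fixed stream length are assumptions of this sketch.

```python
def to_unary(value, max_value):
    """Temporal unary encoding: `value` becomes a bitstream of length
    `max_value` whose first `value` slots are 1 (thermometer code)."""
    return [1] * value + [0] * (max_value - value)

def unary_multiply(a, b, max_value):
    """Multiply by counting coincident 1s across all pairs of time
    slots; equivalent to popcount(a_stream) * popcount(b_stream)."""
    sa, sb = to_unary(a, max_value), to_unary(b, max_value)
    return sum(x & y for x in sa for y in sb)

def unary_dot(row, col, max_value):
    """Dot product built from unary multiplies: the inner loop of a
    (very naive) unary GEMM."""
    return sum(unary_multiply(a, b, max_value) for a, b in zip(row, col))

print(unary_multiply(3, 2, 7))             # -> 6
print(unary_dot([1, 2, 3], [4, 5, 6], 7))  # -> 4 + 10 + 18 = 32
```

Note how a zero operand yields an all-zero bitstream and contributes no switching activity at all, which is why weight sparsity in real models (the CNNs and LLaMA2 analyzed in the paper) interacts so strongly with unary hardware's energy efficiency.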