🤖 AI Summary
Balancing inference efficiency and accuracy for DNNs on edge devices remains challenging. This paper introduces a novel "repetition-sparsity trade-off" perspective and proposes PLUM, a unified software-hardware co-designed quantization framework. PLUM formally characterizes this trade-off for the first time; introduces a signed binary quantization strategy that significantly improves accuracy at a fixed non-zero weight count; and jointly designs forward/backward propagation with hardware-aware tensor representations to enable tensor-level repetition modeling and sparsity-aware hardware scheduling. Evaluated with ResNet-18 on ILSVRC-2012, PLUM achieves 66.2% top-1 accuracy, 26% higher measured inference throughput, 2× energy efficiency, and a 2.8× reduction in model density, outperforming state-of-the-art binary methods across all metrics.
📝 Abstract
Efficient inference of Deep Neural Networks (DNNs) on resource-constrained edge devices is essential. Quantization and sparsity are key techniques that, at the hardware-software interface, manifest as repetition and sparsity within tensors. This paper introduces the concept of the repetition-sparsity trade-off, which helps explain computational efficiency during inference. We propose PLUM, a unified co-design framework that integrates DNN inference systems with quantization (forward and backward pass) to exploit the repetition-sparsity trade-off and improve inference efficiency. Our results demonstrate that PLUM's quantization method is more accurate than binary quantization at the same number of non-zero weights. Detailed analysis indicates that signed binarization produces a smaller distribution of effectual (non-zero) parameters nested within the larger distribution of total parameters of the latent full-precision weights of a DNN block. Finally, the proposed PLUM framework achieves a 26% speedup on real hardware, doubles energy efficiency, and reduces density by 2.8× relative to binary methods, while matching the top-1 accuracy of prior-art methods for ResNets on ImageNet (66.2% top-1), presenting an alternative solution for deploying efficient models in resource-limited environments.
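To make the "effectual subset nested in the latent weights" idea concrete, below is a minimal illustrative sketch of a signed-binary (ternary-style) quantizer. The abstract does not specify PLUM's exact quantization rule, so the threshold-based mapping and the `threshold` value here are assumptions for illustration only: latent full-precision weights are mapped to {-1, 0, +1}, where small-magnitude weights become 0 (sparsity) and the rest keep only their sign (repetition of a single shared magnitude).

```python
import numpy as np

def signed_binarize(w, threshold=0.05):
    """Illustrative signed-binary quantizer (hypothetical; not PLUM's
    exact rule). Maps latent full-precision weights to {-1, 0, +1}:
    weights with |w| < threshold become 0, the rest keep their sign."""
    q = np.sign(w)                    # +1 / -1 for non-zero weights
    q[np.abs(w) < threshold] = 0.0    # zero out ineffectual weights
    return q

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000)   # latent full-precision weights
q = signed_binarize(w)

# The effectual (non-zero) parameters form a strict subset of the
# total parameters, which is the density reduction the paper measures.
density = np.count_nonzero(q) / q.size
```

The non-zero entries of `q` all share one magnitude (repetition), while the zeros contribute sparsity; tightening `threshold` trades one for the other, which is one way to read the repetition-sparsity trade-off at the tensor level.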