🤖 AI Summary
This work proposes a maximum-margin hyperdimensional computing (HDC) classifier that addresses the limited classification accuracy of conventional HDC approaches while preserving the efficient, low-complexity learning that makes HDC attractive for resource-constrained devices. By establishing, for the first time, a formal connection between HDC and support vector machines (SVMs), the study brings the maximum-margin principle into the HDC framework, yielding a learning mechanism that combines high accuracy with hardware-friendly implementation. The proposed method significantly outperforms baseline HDC techniques across multiple benchmark datasets, improving classification accuracy while maintaining low computational complexity. This advancement offers an efficient and practical solution for edge-intelligence applications where computational resources are limited.
📝 Abstract
Overparameterized machine learning (ML) methods such as neural networks may be prohibitively resource-intensive for devices with limited computational capabilities. Hyperdimensional computing (HDC) is an emerging resource-efficient, low-complexity ML method that allows hardware-efficient implementations of (re-)training and inference procedures. In this paper, we propose a maximum-margin HDC classifier, which significantly outperforms baseline HDC methods on several benchmark datasets. Our method leverages a formal relation between HDC and support vector machines (SVMs), which we establish for the first time. Our findings may inspire novel HDC methods with potentially more hardware-oriented implementations than SVMs, thus enabling more efficient learning solutions for various intelligent resource-constrained applications.
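To make the core idea concrete, here is a minimal sketch (not the paper's actual algorithm): inputs are encoded into high-dimensional bipolar hypervectors via a fixed random projection, a common HDC-style encoding, and a linear maximum-margin classifier is then trained on the encodings with a Pegasos-style hinge-loss subgradient update. The dimensionality `D`, the sign-of-random-projection encoder, and all hyperparameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2000  # assumed hyperdimensional space dimensionality (illustrative)


def encode(X, proj):
    """Map d-dim inputs to bipolar {-1,+1}^D hypervectors via random projection + sign."""
    return np.sign(X @ proj)


def train_max_margin(H, y, lam=0.001, epochs=20):
    """Pegasos-style subgradient descent on the regularized hinge loss."""
    n, dim = H.shape
    w = np.zeros(dim)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # standard Pegasos step size
            if y[i] * (w @ H[i]) < 1:          # margin violated: move toward example
                w = (1 - eta * lam) * w + eta * y[i] * H[i]
            else:                               # margin satisfied: shrink only
                w = (1 - eta * lam) * w
    return w


# Toy linearly separable 2-D data (purely for illustration)
n = 200
X = rng.normal(size=(n, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

proj = rng.normal(size=(2, D))  # fixed random encoding matrix
H = encode(X, proj)
w = train_max_margin(H, y)
acc = np.mean(np.sign(H @ w) == y)
print(f"training accuracy: {acc:.2f}")
```

The appeal of this combination is that both the encoding (a random projection followed by a sign) and inference (a single dot product with `w`) reduce to simple, highly parallel operations on bipolar vectors, which is what makes hardware-efficient implementations plausible; the maximum-margin training objective is what lifts accuracy above naive HDC class-prototype averaging.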