Support Vector Machines Classification on Bendable RISC-V

📅 2025-08-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Machine learning deployment in flexible electronics faces critical challenges of large feature dimensions and high power consumption, hindering real-time, energy-efficient edge intelligence. Method: This work proposes a RISC-V-based heterogeneous architecture tailored for ultra-low-power edge AI, featuring a lightweight SVM accelerator with innovative support for one-versus-one (OvO) and one-versus-rest (OvR) multi-class strategies and configurable 4-/8-/16-bit weight quantization. The design tightly integrates a custom ML coprocessor with the open-source RISC-V instruction set to enable efficient vectorized computation and low-bitwidth inference. Contribution/Results: Evaluated on a flexible substrate, the system achieves a 21× average improvement in inference speed and energy efficiency over general-purpose processors. To our knowledge, this is the first open-source framework enabling real-time classification on bendable RISC-V systems, significantly advancing the development of low-cost, high-energy-efficiency flexible intelligent sensing devices.
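The one-versus-one (OvO) and one-versus-rest (OvR) multi-class strategies the accelerator supports can be sketched in a few lines of NumPy. This is a minimal floating-point illustration of the two decision rules only; the function names and data layout are assumptions, not the paper's fixed-point hardware implementation:

```python
import numpy as np

def ovr_predict(X, W, b):
    """One-vs-rest: one binary SVM per class; the class whose
    decision function scores highest wins."""
    scores = X @ W.T + b              # shape (n_samples, n_classes)
    return np.argmax(scores, axis=1)

def ovo_predict(X, classifiers, n_classes):
    """One-vs-one: one binary SVM per class pair (i, j); each
    casts a vote for i or j, and the majority class wins."""
    votes = np.zeros((X.shape[0], n_classes), dtype=int)
    for (i, j), (w, b) in classifiers.items():
        pred = X @ w + b > 0          # positive margin -> class i
        votes[np.arange(X.shape[0]), np.where(pred, i, j)] += 1
    return np.argmax(votes, axis=1)
```

OvR needs one classifier per class, while OvO needs one per class pair, which is why hardware support for both strategies matters: the better trade-off between classifier count and per-classifier simplicity depends on the number of classes.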

📝 Abstract
Flexible Electronics (FE) technology offers unique characteristics in electronic manufacturing, providing ultra-low-cost, lightweight, and environmentally friendly alternatives to traditional rigid electronics. These characteristics enable a range of applications that were previously constrained by the cost and rigidity of conventional silicon technology. Machine learning (ML) is essential for enabling autonomous, real-time intelligence on devices with smart sensing capabilities in everyday objects. However, the large feature sizes and high power consumption of these devices pose a challenge to the realization of flexible ML applications. To address the above, we propose an open-source framework for developing ML co-processors for the Bendable RISC-V core. In addition, we present a custom ML accelerator architecture for Support Vector Machines (SVMs), supporting both one-vs-one (OvO) and one-vs-rest (OvR) algorithms. Our ML accelerator adopts a generic, precision-scalable design, supporting 4-, 8-, and 16-bit weight representations. Experimental results demonstrate a 21× improvement in both inference execution time and energy efficiency, on average, highlighting its potential for low-power, flexible intelligence on the edge.
Problem

Research questions and friction points this paper is trying to address.

Implementing SVM on flexible electronics with low power
Overcoming large feature sizes in flexible ML applications
Designing precision-scalable ML accelerator for Bendable RISC-V
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bendable RISC-V core ML co-processor framework
Custom SVM accelerator supporting OvO and OvR
Precision-scalable design with multiple bit representations
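The precision-scalable weight representation above (4-, 8-, and 16-bit) can be illustrated with a simple symmetric linear quantizer. This is a hypothetical sketch, assuming a scale-per-tensor scheme with signed integers; the paper's actual quantization method is not detailed in this summary:

```python
import numpy as np

def quantize_weights(w, bits):
    """Symmetric linear quantization of SVM weights onto a signed
    integer grid of the given bit width (assumed scheme)."""
    qmax = 2 ** (bits - 1) - 1                # e.g. 7, 127, 32767
    scale = np.max(np.abs(w)) / qmax          # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Recover approximate floating-point weights."""
    return q.astype(np.float32) * scale
```

Lower bit widths shrink storage and multiplier cost at the price of rounding error, which is the trade-off a configurable 4-/8-/16-bit datapath lets the designer tune per application.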