USEFUSE: Uniform Stride for Enhanced Performance in Fused Layer Architecture of Deep Neural Networks

📅 2024-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address high inference latency, low energy efficiency, and severe computational redundancy in CNN deployment on edge devices, this paper proposes a high-efficiency hardware accelerator. Methodologically, it introduces the novel *utile stride* strategy to enhance operational intensity; designs a ReLU-aware invalid-convolution skipping mechanism to dynamically eliminate redundant computations; and employs bit-serial sum-of-products (SOP) arithmetic with multi-layer fusion, coupled with tile-level uniform data scheduling, to drastically reduce off-chip memory access overhead. The architecture supports a dual-mode configuration (low-latency and resource-constrained) while preserving model accuracy. Experimental results demonstrate substantial reductions in both power consumption and inference latency. This work delivers a customized acceleration solution for edge AI, achieving high throughput and ultra-low power consumption without accuracy degradation.
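The left-to-right bit-serial SOP idea can be sketched in software: instead of multiplying full words, each step consumes one bit-plane of the activations, MSB first, and shifts the accumulator so partial results become available early. The function name and 8-bit word width below are illustrative assumptions, not details from the paper.

```python
def bit_serial_sop(weights, activations, nbits=8):
    """MSB-first (left-to-right) bit-serial sum-of-products.

    Each iteration processes one bit-plane of the unsigned
    activations; shifting the accumulator left means the most
    significant partial sums are produced first, which is what
    enables low response time in a hardware realization.
    """
    acc = 0
    for b in range(nbits - 1, -1, -1):  # MSB -> LSB
        plane = sum(w * ((a >> b) & 1) for w, a in zip(weights, activations))
        acc = (acc << 1) + plane
    return acc

# Matches the ordinary dot product:
# bit_serial_sop([1, 2, 3], [4, 5, 6]) == 1*4 + 2*5 + 3*6 == 32
```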

📝 Abstract
Convolutional Neural Networks (CNNs) are crucial in various applications, but their deployment on resource-constrained edge devices poses challenges. This study presents Sum-of-Products (SOP) units for convolution that use low-latency, left-to-right bit-serial arithmetic to minimize response time and enhance overall performance. It also proposes a methodology for fusing multiple convolution layers to reduce off-chip memory communication and increase overall performance. An effective mechanism detects and skips inefficient convolutions after ReLU layers, minimizing power consumption without compromising accuracy. Furthermore, efficient tile movement guarantees uniform access to the fusion pyramid, and an analysis demonstrates that the utile stride strategy improves operational intensity. Two designs cater to varied demands: one targets minimal response time for mission-critical applications, while the other targets resource-constrained devices at comparable latency. This approach notably reduces redundant computations, improving the efficiency of CNN deployment on edge devices.
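As a rough illustration of layer fusion, the 1-D, two-layer sketch below computes each final output directly from the input "pyramid" region it depends on, so the intermediate feature map stays in local storage instead of being written to off-chip memory. The 1-D simplification and all names are mine, not the paper's.

```python
def fused_two_layer_1d(x, k1, k2):
    """Fused evaluation of two stacked valid 1-D convolutions.

    For each layer-2 output, only the layer-1 values in its
    dependence pyramid are computed, held locally (here: a small
    list), and consumed immediately -- the full intermediate map
    is never materialized.
    """
    r1, r2 = len(k1), len(k2)
    out = []
    for o in range(len(x) - r1 - r2 + 2):
        # Layer-1 values this layer-2 output depends on (pyramid base).
        mid = [sum(x[o + j + t] * k1[t] for t in range(r1)) for j in range(r2)]
        out.append(sum(mid[j] * k2[j] for j in range(r2)))
    return out

# Equivalent to convolving layer by layer:
# fused_two_layer_1d([1, 0, 2, 1], [1, 1], [1, 1]) == [3, 5]
```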
Problem

Research questions and friction points this paper is trying to address.

Optimizing CNN performance on edge devices with fused layers
Reducing off-chip memory communication in multi-layer convolutions
Minimizing power consumption by skipping inefficient convolutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-latency bit-serial arithmetic for convolution
Fusing convolution layers to reduce memory access
Skipping inefficient convolutions post-ReLU for power savings
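The post-ReLU skipping idea can be shown with a minimal software sketch (hypothetical names; the paper performs this detection in hardware): once ReLU clamps an activation to zero, every product it feeds in the next layer is zero, so those multiplies can be skipped outright.

```python
def sop_with_relu_skip(acts, weights):
    """Sum-of-products over post-ReLU activations, skipping
    multiplies whose operand is zero after ReLU.

    Returns the accumulated result and how many multiplies
    were avoided.
    """
    acc, skipped = 0, 0
    for a, w in zip(acts, weights):
        a = max(a, 0)        # ReLU
        if a == 0:
            skipped += 1     # zero operand: no work needed
            continue
        acc += a * w
    return acc, skipped

# sop_with_relu_skip([-1, 2, 0, 3], [5, 5, 5, 5]) -> (25, 2)
```

The result is bit-exact with the unskipped computation, which is why this saves power without any accuracy loss.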