Optimizing DNN Inference on Multi-Accelerator SoCs at Training-time

📅 2024-09-27
🏛️ IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
🤖 AI Summary
Optimizing DNN inference on edge multi-accelerator SoCs hinges on fine-grained, hardware-aware layer mapping across heterogeneous compute units (CUs), which requires jointly optimizing accuracy, latency, and energy efficiency. To address this, the authors propose a training-time, hardware-aware framework that partitions individual layers and parallelizes them across accelerators, co-optimizing the neural architecture and the hardware deployment strategy during training rather than after it, as conventional post-training deployment flows do. They further introduce a CU-specific modeling framework tailored to the DIANA and Darkside SoCs, enabling Pareto-optimal multi-objective optimization over accuracy, latency, and energy consumption. Experiments on the Darkside SoC demonstrate up to 8x latency reduction at identical accuracy, or a 50.8x energy-efficiency gain with <0.3% accuracy degradation, validated across CIFAR-10, CIFAR-100, and ImageNet benchmarks.
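The core idea of intra-layer partitioning can be illustrated with a small sketch: a layer's output channels are split between two CUs with different quantization behavior, the partitions run in parallel, and their outputs are concatenated. This is a minimal illustration, not the paper's implementation; the function names, the 8-bit/2-bit split, and the use of a fully connected layer are all assumptions made here for brevity.

```python
import numpy as np

def fake_quantize(x, bits):
    # Simulate uniform symmetric quantization at the given bit-width
    # (illustrative stand-in for a CU's numeric format).
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    if scale == 0:
        return x
    return np.round(x / scale) * scale

def split_layer_across_cus(weights, split):
    """Partition a layer's output channels between two CUs.

    weights: (out_channels, in_channels) matrix of a fully connected layer.
    split:   number of output channels assigned to a hypothetical 8-bit
             digital CU; the rest go to a more aggressively quantized
             (here, 2-bit) CU.
    """
    w_digital = fake_quantize(weights[:split], bits=8)
    w_analog = fake_quantize(weights[split:], bits=2)
    return w_digital, w_analog

def forward(x, w_digital, w_analog):
    # Both partitions execute in parallel on their CUs;
    # the outputs are concatenated along the channel dimension.
    return np.concatenate([w_digital @ x, w_analog @ x])
```

The `split` point is exactly the kind of knob the framework learns at training time, since moving channels to the low-precision CU trades accuracy for latency/energy.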

📝 Abstract
The demand for executing Deep Neural Networks (DNNs) with low latency and minimal power consumption at the edge has led to the development of advanced heterogeneous Systems-on-Chips (SoCs) that incorporate multiple specialized computing units (CUs), such as accelerators. Offloading DNN computations to a specific CU from the available set often exposes accuracy vs efficiency trade-offs, due to differences in their supported operations (e.g., standard vs. depthwise convolution) or data representations (e.g., more/less aggressively quantized). A challenging yet unresolved issue is how to map a DNN onto these multi-CU systems to maximally exploit the parallelization possibilities while taking accuracy into account. To address this problem, we present ODiMO, a hardware-aware tool that efficiently explores fine-grain mapping of DNNs among various on-chip CUs, during the training phase. ODiMO strategically splits individual layers of the neural network and executes them in parallel on the multiple available CUs, aiming to balance the total inference energy consumption or latency with the resulting accuracy, impacted by the unique features of the different hardware units. We test our approach on CIFAR-10, CIFAR-100, and ImageNet, targeting two open-source heterogeneous SoCs, i.e., DIANA and Darkside. We obtain a rich collection of Pareto-optimal networks in the accuracy vs. energy or latency space. We show that ODiMO reduces the latency of a DNN executed on the Darkside SoC by up to 8x at iso-accuracy, compared to manual heuristic mappings. When targeting energy, on the same SoC, ODiMO produced up to 50.8x more efficient mappings, with minimal accuracy drop (<0.3%).
Problem

Research questions and friction points this paper is trying to address.

Optimize DNN inference on multi-accelerator SoCs
Balance accuracy against latency and energy in DNN execution
Explore fine-grained DNN-to-CU mappings during the training phase
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardware-aware DNN mapping tool (ODiMO)
Parallel execution of split layers across multiple CUs
Joint optimization of accuracy with latency or energy