🤖 AI Summary
The design space of multi-Compute-Engine (CE) CNN accelerators is vast, and conventional RTL-based evaluation is prohibitively slow. Method: This paper proposes MCCM, a high-accuracy analytical cost model, along with an integrated evaluation methodology. MCCM is the first to jointly model CE architecture, FPGA resource allocation, and CNN-operator mapping, combining hardware abstraction, operator-level joint resource–latency modeling, and FPGA constraint quantification. This enables performance and energy-efficiency estimation on the order of 10⁵× faster than RTL synthesis while maintaining >90% average accuracy. Contribution/Results: The model enables end-to-end accelerator comparison, bottleneck identification, and large-scale design-space exploration. Experiments demonstrate that MCCM-guided optimization yields customized multi-CE accelerators outperforming state-of-the-art solutions. The model is publicly open-sourced.
📝 Abstract
Convolutional Neural Networks (CNNs) serve various applications with diverse performance and resource requirements. Model-aware CNN accelerators best address these diverse requirements. These accelerators usually combine multiple dedicated Compute Engines (CEs). The flexibility of Field-Programmable Gate Arrays (FPGAs) enables the design of such multiple Compute-Engine (multiple-CE) accelerators. However, existing multiple-CE accelerators differ in how they arrange their CEs and distribute the FPGA resources and CNN operators among the CEs. The design space of multiple-CE accelerators comprises numerous such arrangements, which makes a systematic identification of the best ones an open challenge. This paper proposes a multiple-CE accelerator analytical Cost Model (MCCM) and an evaluation methodology built around MCCM. The model and methodology streamline the expression of any multiple-CE accelerator and provide a fast evaluation of its performance and efficiency. MCCM is on the order of 100000x faster than traditional synthesis-based evaluation and has an average accuracy of >90%. The paper presents three use cases of MCCM. The first describes an end-to-end evaluation of state-of-the-art multiple-CE accelerators considering various metrics, CNN models, and resource budgets. The second describes a fine-grained evaluation that helps identify performance bottlenecks of multiple-CE accelerators. The third demonstrates that MCCM's fast evaluation enables exploring the vast design space of multiple-CE accelerators. These use cases show that no unique CE arrangement achieves the best results across different metrics, CNN models, and resource budgets. They also show that fast evaluation enables design space exploration, resulting in accelerator designs that outperform state-of-the-art ones. MCCM is available at https://github.com/fqararyah/MCCM.