🤖 AI Summary
This paper addresses the challenge of deploying deep learning models on reconfigurable hardware (FPGAs and ASICs) under tight latency, power, and resource constraints. It presents hls4ml, an open-source, modular hardware–software co-design framework that supports mainstream frontends, including TensorFlow and PyTorch, and targets heterogeneous high-level synthesis (HLS) toolchains such as AMD/Xilinx Vitis HLS, Intel oneAPI, and Catapult HLS. The framework integrates model quantization, structured pruning, and hardware-aware scheduling to automatically generate synthesizable HLS code end to end. Compared with manual RTL design, the approach substantially improves deployment efficiency: across diverse scientific and industrial inference workloads, the authors report average reductions of 37% in logic resource utilization, 42% in latency, and 31% in power consumption, along with strong cross-platform scalability and practical system applicability.
📝 Abstract
We present hls4ml, a free and open-source platform that translates machine learning (ML) models from modern deep learning frameworks into high-level synthesis (HLS) code that can be integrated into full designs for field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs). With its flexible and modular design, hls4ml supports a large number of deep learning frameworks and can target HLS compilers from several vendors, including Vitis HLS, Intel oneAPI, and Catapult HLS. Together with a wider ecosystem for hardware–software co-design, hls4ml has enabled the acceleration of ML inference in a wide range of commercial and scientific applications where low latency, resource usage, and power consumption are critical. In this paper, we describe the structure and functionality of the hls4ml platform. The overarching design considerations for the generated HLS code are discussed, together with selected performance results.