🤖 AI Summary
To address the difficulty of pragma tuning, the inefficiency of design space exploration (DSE), and the heavy reliance on expert knowledge in high-level synthesis (HLS), this paper proposes DID4HLS, an end-to-end inverse hardware design framework. The method jointly leverages graph neural networks (GNNs) and conditional variational autoencoders (CVAEs) to directly model the conditional distribution of post-synthesis design features, enabling heuristic-free, multi-objective conditional generation toward Pareto-optimal designs with generalizable optimization. Experiments on six benchmark circuits show a 42.8% average improvement in average distance to reference set (ADRS) over the best-performing baseline, significantly shorter DSE turnaround time, and strong robustness and computational efficiency.
📝 Abstract
High-level synthesis (HLS) has significantly advanced the automation of digital circuit design, yet pragma tuning still demands substantial expertise and time. Existing design space exploration (DSE) approaches rely either on heuristic methods, which discard information needed to unlock further optimization, or on predictive models, which generalize poorly because HLS runs are time-consuming and the design space grows exponentially. To address these challenges, we propose Deep Inverse Design for HLS (DID4HLS), a novel approach that integrates graph neural networks and generative models. DID4HLS iteratively optimizes hardware designs for compute-intensive algorithms by learning conditional distributions of design features from post-HLS data. Against four state-of-the-art DSE baselines across six benchmarks, our method improved average distance to reference set (ADRS) by 42.8% on average over the best-performing baseline, while demonstrating high robustness and efficiency.
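To make the evaluation metric concrete, the sketch below implements Pareto-front extraction and one common formulation of ADRS for a minimization problem (e.g., latency vs. area): for each point on the reference front, take the smallest normalized worst-case objective gap to any point on the approximated front, then average. The function names and the exact distance formulation are illustrative assumptions, not taken from the paper.

```python
def dominates(q, p):
    """q dominates p under minimization: no worse in every objective,
    strictly better in at least one."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    """Keep only the non-dominated design points (minimization)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def adrs(reference, approx):
    """Average Distance to Reference Set: 0 means the approximated
    front matches the reference front exactly; lower is better."""
    def gap(ref_pt, pt):
        # Normalized worst-case gap across objectives.
        return max((x - r) / r for r, x in zip(ref_pt, pt))
    return sum(min(gap(g, w) for w in approx) for g in reference) / len(reference)

# Toy example: three reference designs, one dominated candidate.
ref = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
front = pareto_front(ref + [(3.0, 3.0)])   # (3,3) is dominated by (2,2)
score = adrs(ref, [(2.0, 4.0), (4.0, 2.0)])
```

Here `adrs(ref, ref)` is 0.0, and a DSE method's quality is judged by how small its ADRS is against the best-known reference front; the paper's 42.8% figure is a relative reduction in this metric versus the strongest baseline.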