DiffHLS: Differential Learning for High-Level Synthesis QoR Prediction with GNNs and LLM Code Embeddings

📅 2026-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Design space exploration in high-level synthesis (HLS) is hindered by the high computational cost of repeated compilation and the difficulty of accurately predicting the quality of results (QoR) for optimized designs. To address this challenge, this work proposes DiffHLS, a novel framework that integrates graph neural networks with embeddings from a pretrained code large language model. DiffHLS employs a dual-branch architecture to separately model the kernel code and its pragma-induced intermediate representation graphs, and introduces a differential learning mechanism to jointly predict both baseline performance and the performance delta induced by optimizations, thereby replacing conventional absolute QoR regression. Experimental results demonstrate that DiffHLS consistently outperforms baseline methods across four GNN backbones on the PolyBench benchmark and exhibits strong scalability and generalization on the ForgeHLS dataset.

📝 Abstract
High-Level Synthesis (HLS) compiles C/C++ into RTL, but exploring pragma-driven optimization choices remains expensive because each design point requires time-consuming synthesis. We propose DiffHLS, a differential learning framework for HLS Quality-of-Result (QoR) prediction that learns from kernel–design pairs: a kernel baseline and a pragma-inserted design variant. DiffHLS encodes kernel and design intermediate-representation graphs with dedicated graph neural network (GNN) branches, and augments the delta pathway with code embeddings from a pretrained code large language model (LLM). Instead of regressing absolute targets directly, we jointly predict the kernel baseline and the design-induced delta, and compose them to obtain the design prediction. On PolyBench, DiffHLS attains lower average MAPE than GNN baselines under four GNN backbones, and LLM code embeddings consistently improve over a GNN-only ablation. We further validate scalability on the ForgeHLS dataset.
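The differential composition described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the embedding vectors stand in for the GNN branch outputs and the code-LLM embedding, and the `linear_head` helper is a hypothetical placeholder for the actual regression heads.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_head(dim, rng):
    """Toy one-layer regression head standing in for an MLP predictor."""
    w = rng.normal(scale=0.1, size=dim)
    return lambda x: float(x @ w)

dim = 16
g_kernel = rng.normal(size=dim)  # kernel-branch GNN embedding (assumed)
g_design = rng.normal(size=dim)  # design-branch GNN embedding (assumed)
e_code   = rng.normal(size=dim)  # pretrained code-LLM embedding (assumed)

base_head  = linear_head(dim, rng)      # predicts the kernel baseline QoR
delta_head = linear_head(3 * dim, rng)  # predicts the pragma-induced delta

y_base  = base_head(g_kernel)
y_delta = delta_head(np.concatenate([g_kernel, g_design, e_code]))

# Differential prediction: compose baseline and delta instead of
# regressing the absolute design QoR directly.
y_design = y_base + y_delta
print(y_design)
```

The point of the composition is that the model only has to learn the (often smaller and better-conditioned) effect of the pragmas relative to the baseline, rather than the full absolute target.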
Problem

Research questions and friction points this paper is trying to address.

High-Level Synthesis
Quality-of-Result prediction
pragma optimization
design space exploration
QoR estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differential Learning
High-Level Synthesis
Graph Neural Networks
Code LLM Embeddings
QoR Prediction