🤖 AI Summary
Existing RTL quality estimation methods neglect structural semantics of hardware designs, resulting in limited accuracy and generalization. To address this, we propose StructRTL—the first self-supervised framework integrating structure-aware graph learning with cross-stage knowledge distillation. StructRTL models RTL semantics via control data flow graphs (CDFGs) and employs graph neural networks for topology-aware representation learning. It further bridges front-end RTL and back-end implementation by leveraging post-mapping netlist features as supervisory signals through knowledge distillation. Additionally, semantic embeddings generated by large language models (LLMs) are incorporated to enhance representation capacity. Evaluated across multiple RTL quality estimation tasks—including synthesis runtime, area, and power prediction—StructRTL consistently outperforms state-of-the-art methods. Ablation studies confirm the complementary benefits of structural modeling and cross-stage supervision, demonstrating that jointly encoding design topology and implementation-aware signals significantly improves estimation fidelity and transferability.
📝 Abstract
Estimating the quality of register transfer level (RTL) designs is crucial in the electronic design automation (EDA) workflow, as it enables instant feedback on key metrics like area and delay without the need for time-consuming logic synthesis. While recent approaches have leveraged large language models (LLMs) to derive embeddings from RTL code and achieved promising results, they overlook the structural semantics essential for accurate quality estimation. In contrast, the control data flow graph (CDFG) view exposes the design's structural characteristics more explicitly, offering richer cues for representation learning. In this work, we introduce a novel structure-aware graph self-supervised learning framework, StructRTL, for improved RTL design quality estimation. By learning structure-informed representations from CDFGs, our method significantly outperforms prior art on various quality estimation tasks. To further boost performance, we incorporate a knowledge distillation strategy that transfers low-level insights from post-mapping netlists into the CDFG predictor. Experiments show that our approach establishes new state-of-the-art results, demonstrating the effectiveness of combining structural learning with cross-stage supervision.
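The two ingredients described above—a GNN that encodes the CDFG and a distillation signal from the post-mapping netlist—can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the toy graph, the single mean-aggregation layer, the mean-pooled graph embedding, and the `alpha`-weighted loss combining embedding matching with a quality-regression term are all illustrative assumptions.

```python
# Minimal sketch of structure-aware CDFG encoding with cross-stage
# knowledge distillation. All names, shapes, and the toy 3-node graph
# are illustrative assumptions, not StructRTL's actual architecture.
import numpy as np

def gnn_layer(adj, feats, weight):
    """One message-passing round: mean-aggregate neighbor features, then project."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero for sinks
    agg = (adj @ feats) / deg                         # mean of successor features
    return np.tanh((feats + agg) @ weight)            # combine self + neighborhood

def encode_cdfg(adj, feats, weight):
    """Node embeddings -> mean-pooled graph-level embedding."""
    h = gnn_layer(adj, feats, weight)
    return h.mean(axis=0)

def distill_loss(student_emb, teacher_emb, pred, target, alpha=0.5):
    """Weighted sum of embedding-matching (distillation) and regression loss."""
    kd = np.mean((student_emb - teacher_emb) ** 2)    # pull CDFG embedding toward
    task = (pred - target) ** 2                       # the netlist-derived teacher
    return alpha * kd + (1 - alpha) * task

# Toy CDFG with three operation nodes: 0 -> 1 -> 2
adj = np.array([[0., 1., 0.],
                [0., 0., 1.],
                [0., 0., 0.]])
feats = np.eye(3)                     # one-hot node-type features
rng = np.random.default_rng(0)
weight = rng.normal(size=(3, 4))

student = encode_cdfg(adj, feats, weight)
teacher = rng.normal(size=4)          # stand-in for a netlist-derived embedding
loss = distill_loss(student, teacher, pred=1.2, target=1.0)
```

In a real training loop the teacher embedding would come from a network trained on post-mapping netlists, and `pred` from a regression head on the student embedding; minimizing `loss` jointly fits the quality metric and transfers the low-level signal into the CDFG predictor.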