Logic Synthesis Optimization with Predictive Self-Supervision via Causal Transformers

📅 2024-09-16
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing logic synthesis optimization (LSO) models suffer from poor generalization and overfitting in quality-of-results (QoR) prediction, primarily due to the scarcity of publicly available circuit benchmarks and the limited representational capacity of graph encoders. To address these challenges, this work proposes a causal Transformer-based joint modeling framework: (i) a cross-modal co-attention mechanism jointly encodes circuit graphs and optimization action sequences, enabling structural-action co-representation; and (ii) a novel predictive self-supervised learning paradigm tailored to data-scarce settings, alleviating graph-encoding bottlenecks and mitigating policy overfitting in reinforcement learning. Evaluated on EPFL, OABCD, and a private industrial dataset, the method reduces QoR prediction error by 5.74%, 4.35%, and 17.06%, respectively, significantly outperforming state-of-the-art baselines.
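The cross-modal co-attention described above can be sketched minimally in numpy. This is a toy illustration under assumed embedding shapes, not the paper's implementation: optimization-action embeddings act as queries that attend over circuit-graph node embeddings, so each recipe step is fused with circuit structure.

```python
import numpy as np

def cross_attention(actions, nodes, d_k=16, seed=0):
    """One cross-attention head (hypothetical weights/shapes):
    action-step embeddings query circuit-node keys/values."""
    rng = np.random.default_rng(seed)
    d_a, d_n = actions.shape[1], nodes.shape[1]
    Wq = rng.standard_normal((d_a, d_k)) / np.sqrt(d_a)
    Wk = rng.standard_normal((d_n, d_k)) / np.sqrt(d_n)
    Wv = rng.standard_normal((d_n, d_k)) / np.sqrt(d_n)
    Q, K, V = actions @ Wq, nodes @ Wk, nodes @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                 # (steps, nodes)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)              # softmax over nodes
    return w @ V                                    # (steps, d_k)

# toy circuit: 5 node embeddings; toy recipe: 3 optimization steps
nodes = np.random.default_rng(1).standard_normal((5, 8))
actions = np.random.default_rng(2).standard_normal((3, 4))
fused = cross_attention(actions, nodes)
print(fused.shape)  # (3, 16)
```

Each row of `fused` is a structure-aware representation of one optimization step, the kind of co-representation a downstream QoR head could consume.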

📝 Abstract
Contemporary hardware design benefits from the abstraction provided by high-level logic gates, streamlining the implementation of logic circuits. Logic Synthesis Optimization (LSO) operates at one level of abstraction within the Electronic Design Automation (EDA) workflow, targeting improvements in logic circuits with respect to performance metrics such as size and speed in the final layout. Recent trends in the field show a growing interest in leveraging Machine Learning (ML) for EDA, notably through ML-guided logic synthesis utilizing policy-based Reinforcement Learning (RL) methods. Despite these advancements, existing models face challenges such as overfitting and limited generalization, attributed to the constrained availability of public circuits and the expressiveness limitations of graph encoders. To address these hurdles and tackle data scarcity issues, we introduce LSOformer, a novel approach harnessing autoregressive Transformer models and predictive SSL to predict the trajectory of Quality of Results (QoR). LSOformer integrates cross-attention modules to merge insights from circuit graphs and optimization sequences, thereby enhancing prediction accuracy for QoR metrics. Experimental studies validate the effectiveness of LSOformer, showcasing its superior performance over baseline architectures in QoR prediction tasks, where it achieves improvements of 5.74%, 4.35%, and 17.06% on the EPFL, OABCD, and proprietary circuit datasets, respectively, in an inductive setup.
Problem

Research questions and friction points this paper is trying to address.

Improves logic circuit performance using predictive self-supervised learning.
Addresses overfitting and generalization in machine learning for EDA.
Enhances Quality of Results prediction with transformer models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoregressive transformer models for QoR prediction
Predictive self-supervised learning to enhance accuracy
Cross-attention modules for circuit and sequence insights
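The autoregressive (causal) decoding behind QoR-trajectory prediction can be illustrated with a masked self-attention sketch. All weights and shapes here are hypothetical, not the released model: a lower-triangular mask ensures the state at step t, and hence any QoR predicted from it, depends only on optimization actions up to step t.

```python
import numpy as np

def masked_self_attention(x, d_k=8, seed=0):
    """Single-head causal self-attention over a recipe of optimization
    steps (toy weights); step t attends only to steps 0..t."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d_k)
    mask = np.tril(np.ones((len(x), len(x)), dtype=bool))  # causal mask
    scores = np.where(mask, scores, -np.inf)               # hide the future
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V  # per-step states, e.g. fed to a QoR regression head

# changing future steps must not alter earlier per-step states
seq = np.random.default_rng(3).standard_normal((4, 6))
h = masked_self_attention(seq)
seq2 = seq.copy()
seq2[2:] += 1.0                      # perturb only steps 2 and 3
h2 = masked_self_attention(seq2)
print(np.allclose(h[:2], h2[:2]))    # True: steps 0-1 unaffected
```

This causality property is what lets a trajectory-level objective supervise the QoR after every intermediate step rather than only the final value.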
Raika Karimi
Huawei Noah’s Ark Lab, Toronto, Canada
Faezeh Faez
Huawei Noah’s Ark Lab, Toronto, Canada
Yingxue Zhang
Huawei Noah’s Ark Lab, Toronto, Canada
Xing Li
Huawei Noah’s Ark Lab, Hong Kong, China
Lei Chen
Huawei Noah’s Ark Lab, Hong Kong, China
Mingxuan Yuan
Huawei Noah’s Ark Lab, Hong Kong, China
Mahdi Biparva
Senior Research Scientist, Noah's Ark Lab, Huawei Technologies
Agent Learning · Graph Representation Learning · Self-Supervised Learning · Deep Learning