🤖 AI Summary
In approximate computing design space exploration, accuracy prediction traditionally relies on full synthesis, and existing machine learning methods suffer from poor generalizability and frequent retraining requirements. Method: This paper proposes a pre-trained graph neural network (GNN)-based prediction framework. Its core innovation lies in replacing handcrafted error metrics with learned component embeddings, enabling strong cross-circuit structural transferability and supporting multi-task joint prediction of quality and cost with rapid fine-tuning. Contribution/Results: Evaluated on image convolutional filters, the framework reduces mean squared error by 50% compared to conventional approaches. Without fine-tuning, it outperforms statistical learning methods by 30% in accuracy; with fine-tuning, the improvement reaches 54%.
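The "rapid fine-tuning" claim rests on the pre-trained backbone being frozen: once each circuit configuration is reduced to a fixed graph embedding, adapting to a new prediction task only refits a small readout head. The paper does not specify the head or the fitting procedure, so the sketch below is a hypothetical minimal version using a linear head fitted by ordinary least squares on synthetic embeddings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the pre-trained GNN backbone is frozen, so every
# circuit configuration is represented by a precomputed graph embedding.
# "Fast fine-tuning" then amounts to refitting only a small linear
# readout head on a handful of labeled samples for the new task.
HID_DIM, N_SAMPLES = 32, 40
G = rng.normal(size=(N_SAMPLES, HID_DIM))        # precomputed graph embeddings
w_true = rng.normal(size=HID_DIM)                # unknown task mapping (toy)
y = G @ w_true + rng.normal(0, 0.01, N_SAMPLES)  # measured quality labels

# Fine-tune: ordinary least squares on the readout weights only.
w_head, *_ = np.linalg.lstsq(G, y, rcond=None)
mse = float(np.mean((G @ w_head - y) ** 2))      # training-set fit quality
```

Because only `HID_DIM` weights are estimated, a few dozen synthesized-and-measured samples suffice, which is what makes fine-tuning cheap relative to retraining the whole model.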
📄 Abstract
Approximate computing offers promising energy efficiency benefits for error-tolerant applications, but discovering optimal approximations requires extensive design space exploration (DSE). Predicting the accuracy of circuits composed of approximate components without performing complete synthesis remains a challenging problem. Current machine learning approaches used to automate this task require retraining for each new circuit configuration, making them computationally expensive and time-consuming. This paper presents ApproxGNN, a construction methodology for a pre-trained graph neural network model that predicts the QoR and HW cost of approximate accelerators employing approximate adders from a library. This approach is applicable in DSE for assigning approximate components to operations in the accelerator. Our approach introduces novel component feature extraction based on learned embeddings rather than traditional error metrics, enabling improved transferability to unseen circuits. ApproxGNN models can be trained with a small number of approximate components, support transfer to multiple prediction tasks, utilize precomputed embeddings for efficiency, and significantly improve the accuracy of approximation-error prediction. On a set of image convolutional filters, our experimental results demonstrate that the proposed embeddings improve prediction accuracy (mean squared error) by 50% compared to conventional methods. Furthermore, the overall prediction accuracy is 30% better than statistical machine learning approaches without fine-tuning and 54% better with fast fine-tuning.
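To make the abstract's pipeline concrete, the sketch below shows the overall shape of such a predictor: each operation node in a circuit's dataflow graph carries the learned embedding of its assigned library adder (replacing handcrafted error metrics), a few message-passing steps propagate structural context, and two readout heads jointly predict quality and HW cost. All dimensions, the toy circuit, and the random weights are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a library of approximate adders, each represented by a
# learned embedding vector (precomputed once, reused across circuits)
# instead of a handcrafted error metric such as mean error distance.
N_COMPONENTS, EMB_DIM, HID_DIM = 8, 16, 32
component_emb = rng.normal(0, 0.1, (N_COMPONENTS, EMB_DIM))

# Toy circuit: 5 operations; A is the adjacency matrix of the dataflow
# graph; `assignment` picks one library component per operation node.
assignment = np.array([0, 3, 3, 1, 7])
A = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0]], dtype=float)

def gnn_forward(A, node_feat, W_in, W_upd, w_qor, w_cost, steps=2):
    """Mean-aggregation message passing with a multi-task readout."""
    h = node_feat @ W_in                          # embed node features
    A_hat = A + A.T + np.eye(len(A))              # symmetrize + self-loops
    deg = A_hat.sum(1, keepdims=True)
    for _ in range(steps):
        h = np.maximum((A_hat / deg) @ h @ W_upd, 0.0)  # aggregate + ReLU
    g = h.mean(0)                                 # graph-level embedding
    return g @ w_qor, g @ w_cost                  # joint QoR / HW-cost heads

W_in  = rng.normal(0, 0.1, (EMB_DIM, HID_DIM))
W_upd = rng.normal(0, 0.1, (HID_DIM, HID_DIM))
w_qor  = rng.normal(0, 0.1, HID_DIM)
w_cost = rng.normal(0, 0.1, HID_DIM)

qor_pred, cost_pred = gnn_forward(A, component_emb[assignment],
                                  W_in, W_upd, w_qor, w_cost)
```

The key design point mirrored here is that only `component_emb[assignment]` changes between candidate approximations of the same circuit, so DSE can evaluate many assignments with cheap forward passes instead of full synthesis runs.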