🤖 AI Summary
This work addresses two key challenges in large language model (LLM)-based Verilog code generation: (1) the difficulty of modeling non-textual hardware representations—such as Karnaugh maps, state transition diagrams, and waveforms—and (2) training instability caused by sensitivity to minor, stochastic errors. To tackle these, we propose two core innovations: (1) a correctness-guaranteed synthetic data construction method for non-textual hardware representations, enabling *correct-by-construction* data generation; and (2) a targeted code repair data auto-generation framework leveraging model error reports, integrated with multi-stage, controllable error injection. After fine-tuning StarCoder2-15B on our synthesized data, we achieve new state-of-the-art pass@1 scores on VerilogEval-Machine (+3.8%), VerilogEval-Human (+10.9%), and RTLLM (+6.6%). These improvements demonstrate substantial gains in functional correctness and robustness for hardware-oriented code generation.
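The correct-by-construction idea for non-textual representations can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function name, the flattened truth-table prompt standing in for a rendered Karnaugh map, and the sum-of-minterms construction are all our own assumptions. The key property it demonstrates is that the Verilog label is derived mechanically from the sampled specification, so it is correct without any post-hoc verification.

```python
import random

def correct_by_construction_sample(n_vars=2, seed=0):
    """Sketch: sample a random truth table, render it as a textual
    'K-map'-style prompt, and emit Verilog that is correct by
    construction (a sum of minterms over the sampled table)."""
    rng = random.Random(seed)
    names = ["a", "b", "c", "d"][:n_vars]
    table = {i: rng.randint(0, 1) for i in range(2 ** n_vars)}

    # Prompt: flattened truth table (stand-in for a K-map rendering).
    rows = [f"{i:0{n_vars}b} -> {v}" for i, v in table.items()]
    prompt = "Implement f(" + ",".join(names) + ") given:\n" + "\n".join(rows)

    # Sum of minterms is correct for any table, so the label needs no checking.
    terms = []
    for i, v in table.items():
        if v:
            bits = f"{i:0{n_vars}b}"
            lits = [(n if b == "1" else f"~{n}") for n, b in zip(names, bits)]
            terms.append("(" + " & ".join(lits) + ")")
    body = " | ".join(terms) if terms else "1'b0"
    return prompt, f"assign f = {body};"
```

Scaling this to state-transition diagrams or waveforms would follow the same pattern: sample the abstract specification first, then derive both the prompt rendering and the reference Verilog from it.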
📝 Abstract
Despite the significant progress made in code generation with large language models, challenges persist, especially with hardware description languages such as Verilog. This paper first presents an analysis of LLMs fine-tuned on Verilog coding using synthetic data from prior methods. We identify two main issues: difficulties in handling non-textual representations (Karnaugh maps, state-transition diagrams, and waveforms) and significant variability during training, with models randomly making "minor" mistakes. To address these limitations, we enhance data curation by creating correct-by-construction data targeting non-textual representations. Additionally, we introduce an automated framework that generates error reports from various model checkpoints and injects these errors into open-source code to create targeted code repair data. Our fine-tuned StarCoder2-15B outperforms prior state-of-the-art results by 3.8%, 10.9%, and 6.6% for pass@1 on VerilogEval-Machine, VerilogEval-Human, and RTLLM, respectively.
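The targeted code-repair framework can be sketched in the same spirit. Everything here is a hypothetical illustration: the mutation taxonomy, the function `make_repair_pair`, and the example snippet are our own; the actual framework mines error categories from checkpoint error reports rather than hard-coding them. The point is the shape of the data: injecting one controlled error into known-good Verilog yields a (buggy, fixed) pair whose fix is guaranteed correct.

```python
import re

# Hypothetical error taxonomy, standing in for categories distilled
# from model error reports (the paper's mining step is not shown).
MUTATIONS = {
    "nonblocking_to_blocking": lambda s: s.replace("<=", "=", 1),
    "drop_bit_width": lambda s: re.sub(r"\[\d+:\d+\]\s*", "", s, count=1),
    "flip_reset_polarity": lambda s: s.replace("negedge rst_n", "posedge rst_n", 1),
}

def make_repair_pair(correct_code, error_kind):
    """Inject one controlled error into known-good Verilog, yielding a
    (buggy, fixed) pair for targeted code-repair fine-tuning."""
    buggy = MUTATIONS[error_kind](correct_code)
    if buggy == correct_code:
        return None  # mutation site absent in this snippet; skip the sample
    return {"buggy": buggy, "fixed": correct_code, "error": error_kind}
```

Because the fixed side is the original, verified code, repair pairs inherit correctness for free; "multi-stage, controllable" injection would amount to applying several such mutations in sequence with chosen probabilities.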