🤖 AI Summary
This work proposes the first large language model (LLM)-based framework for automatically generating deep neural network (DNN) accelerator simulators, addressing the lengthy development cycles and limited adaptability of traditional hand-coded approaches in the face of rapidly evolving architectures. By leveraging domain-specific prompt engineering, in-context learning (ICL), and chain-of-thought (CoT) reasoning, the framework translates natural language descriptions into high-fidelity, cycle-accurate simulator code end-to-end. Functional correctness and performance are ensured through iterative feedback loops and validation against the SCALE-Sim benchmark. The generated simulators achieve cycle-level accuracy with less than 1% error and run significantly faster than manual implementations, drastically reducing development time.
📝 Abstract
This paper presents SimulatorCoder, an agent powered by large language models (LLMs) that generates and optimizes deep neural network (DNN) accelerator simulators from natural language descriptions. By integrating domain-specific prompt engineering, including in-context learning (ICL) and chain-of-thought (CoT) reasoning, with a multi-round feedback-verification flow, SimulatorCoder systematically transforms high-level functional requirements into efficient, executable, and architecture-aligned simulator code. Experiments on a customized SCALE-Sim benchmark demonstrate that structured prompting and feedback mechanisms substantially improve both code generation accuracy and simulator performance. The resulting simulators not only maintain cycle-level fidelity with less than 1% error compared to manually implemented counterparts, but also consistently achieve lower simulation runtimes, highlighting the effectiveness of LLM-based methods in accelerating simulator development. Our code is available at https://github.com/xiayuhuan/SimulatorCoder.
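The multi-round feedback-verification flow described above can be sketched as a generate-run-compare loop. This is a minimal illustration only, not SimulatorCoder's actual implementation: `generate_code` stands in for the LLM call, the one-line `cycles` simulators and the off-by-one bug are invented, and `reference` plays the role of the SCALE-Sim ground truth used for cycle-level validation.

```python
# Hypothetical sketch of a multi-round feedback-verification loop.
# All names (generate_code, run_simulator, feedback_loop) and the toy
# cycle model are stand-ins, not the paper's API.

def generate_code(prompt: str, feedback: str = "") -> str:
    """Stub for the LLM call: returns simulator source text.

    The first draft contains a deliberate off-by-one bug; once the
    verification feedback mentions it, the 'model' emits a fix."""
    if "off-by-one" in feedback:
        return "def cycles(rows, cols): return rows + cols - 1"
    return "def cycles(rows, cols): return rows + cols"  # buggy draft

def run_simulator(src: str, rows: int, cols: int) -> int:
    """Execute the generated simulator source and query its cycle count."""
    ns: dict = {}
    exec(src, ns)
    return ns["cycles"](rows, cols)

def feedback_loop(reference, cases, max_rounds=3):
    """Generate, verify against the reference, and feed errors back."""
    feedback, src = "", ""
    for round_no in range(1, max_rounds + 1):
        src = generate_code("DNN accelerator simulator spec", feedback)
        errors = [(r, c) for r, c in cases
                  if run_simulator(src, r, c) != reference(r, c)]
        if not errors:
            return src, round_no          # verified: cycle counts match
        feedback = f"off-by-one on cases {errors}"  # verification report
    return src, max_rounds

# Toy ground truth standing in for a reference simulator's cycle counts.
reference = lambda r, c: r + c - 1
src, rounds = feedback_loop(reference, [(4, 4), (8, 2)])
```

In this toy run the first draft fails verification, the error report is appended to the next prompt, and the second round produces code whose cycle counts match the reference, at which point the loop terminates.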