🤖 AI Summary
This work addresses the high barrier to FPGA acceleration in scientific and edge computing, where even high-level synthesis (HLS) demands extensive manual optimization. To overcome this challenge, we introduce the first large language model (LLM) agent for FPGA design automation, establishing a closed-loop workflow that automatically transforms generic C++ code into highly optimized Vitis HLS kernels without human intervention. By iteratively leveraging co-simulation and synthesis feedback, our approach integrates advanced optimizations, including deep pipelining, vectorization, and dataflow partitioning, and achieves 99.9% of the geometric-mean performance of hand-tuned baselines across 15 representative HPC kernels. Notably, on stencil workloads, it matches the performance of SODA while offering superior code readability.
📝 Abstract
FPGAs offer high performance, low latency, and energy efficiency for accelerated computing, yet adoption in scientific and edge settings is limited by the specialized hardware expertise required. High-level synthesis (HLS) boosts productivity over HDLs, but competitive designs still demand hardware-aware optimizations and careful dataflow design. We introduce LAAFD, an agentic workflow that uses large language models to translate general-purpose C++ into optimized Vitis HLS kernels. LAAFD automates key transformations (deep pipelining, vectorization, and dataflow partitioning) and closes the loop with HLS co-simulation and synthesis feedback to verify correctness while iteratively improving execution time in cycles. Over a suite of 15 kernels representing common compute patterns in HPC, LAAFD achieves 99.9% geomean performance compared to the hand-tuned baseline for Vitis HLS. For stencil workloads, LAAFD matches the performance of SODA, a state-of-the-art DSL-based HLS code generator for stencil solvers, while yielding more readable kernels. These results suggest LAAFD substantially lowers the expertise barrier to FPGA acceleration without sacrificing efficiency.
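To make the named transformations concrete, the sketch below shows the pragma-level form they typically take in Vitis HLS. The kernel and specific pragma choices here are hypothetical illustrations, not code from the paper; `#pragma HLS` directives are hints to the Vitis toolchain, and a standard C++ compiler ignores them as unknown pragmas, so the kernel still builds and runs off-FPGA for functional checking.

```cpp
#include <cstddef>

// Hypothetical vector-scale kernel illustrating the transformations the
// abstract names. In a multi-stage design, `#pragma HLS DATAFLOW` at the
// top level would additionally let producer/consumer stages run
// concurrently (dataflow partitioning).
void scale(const float *in, float *out, std::size_t n, float alpha) {
    for (std::size_t i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1    // deep pipelining: accept one iteration per clock cycle
#pragma HLS UNROLL factor=4  // vectorization: replicate the loop body into 4 parallel lanes
        out[i] = alpha * in[i];
    }
}
```

On a conventional compiler the function is just an element-wise scale; under Vitis HLS the pragmas steer the generated hardware, which is the kind of knob-turning LAAFD's feedback loop automates.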