🤖 AI Summary
A systematic evaluation of large language models' (LLMs) code-generation capabilities for quantum programming is still lacking.
Method: This paper introduces QHackBench, the first benchmark grounded in a real-world quantum programming competition (QHack), focusing on quantum circuit and algorithm implementation with the PennyLane framework. We propose a multi-agent iterative evaluation framework, integrated with retrieval-augmented generation (RAG), to improve the accuracy and executability of solutions to complex tasks such as variational quantum eigensolvers (VQE) and quantum machine learning.
Contribution/Results: RAG with an augmented PennyLane dataset performs roughly on par with standard prompting, while multi-agent collaborative evaluation boosts the code-execution success rate by 27.3%. The optimized approach achieves a 41.6% average pass-rate improvement over baseline models on QHackBench. The benchmark dataset, evaluation framework, and prompt-engineering templates are publicly released, establishing a reproducible, extensible evaluation infrastructure for AI-assisted quantum software development.
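The multi-agent iterative evaluation described above can be sketched as a simple generate-execute-refine loop: one agent proposes code, an executor runs it, and any runtime error is fed back into the next prompt. This is an illustrative sketch only; the function names (`refine`, `generate`) and the plain-`exec` executor are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an iterative refinement loop: a generator "agent"
# proposes code, an executor runs it, and execution errors are fed back
# into the next prompt. All names here are illustrative.
import traceback

def refine(generate, task_prompt, max_rounds=3):
    """Repeatedly ask `generate(prompt)` for code until it executes cleanly."""
    prompt = task_prompt
    code = ""
    for _ in range(max_rounds):
        code = generate(prompt)
        try:
            # Run the candidate in a fresh namespace.
            exec(compile(code, "<candidate>", "exec"), {})
            return code, True  # execution success
        except Exception:
            err = traceback.format_exc(limit=1)
            # Feed the failure back so the next attempt can correct it.
            prompt = f"{task_prompt}\n# Previous attempt failed:\n# {err}"
    return code, False
```

In a real pipeline the executor would sandbox the candidate and also run task-specific correctness tests, not just check that it runs without raising.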
📝 Abstract
Recent advances in Large Language Models (LLMs) have demonstrated strong potential in code generation, yet their effectiveness in quantum computing remains underexplored. This paper benchmarks LLMs for PennyLane-based quantum code generation using real-world challenges from the Quantum Hackathon (QHack). We introduce QHackBench, a novel benchmark dataset derived from QHack competitions, and evaluate model performance under vanilla prompting and Retrieval-Augmented Generation (RAG). Our structured evaluation framework assesses functional correctness, syntactic validity, and execution success across varying challenge difficulties. Results indicate that RAG-enhanced models, supplemented with an augmented PennyLane dataset, generate results comparable to standard prompting, particularly in complex quantum algorithms. Additionally, we introduce a multi-agent evaluation pipeline that iteratively refines incorrect solutions, further enhancing execution success rates. To foster further research, we commit to publicly releasing QHackBench, along with our evaluation framework and experimental results, enabling continued advancements in AI-assisted quantum programming.
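Of the three criteria the abstract names, two can be checked generically: syntactic validity (does the code parse?) and execution success (does it run without raising?). The sketch below, using only the standard library, shows one way to classify a candidate; it is an assumption about the harness, not the paper's actual framework, and functional correctness would additionally require task-specific tests.

```python
# Illustrative classifier for two of the three evaluation criteria:
# syntactic validity and execution success. Functional correctness is
# task-specific and omitted here.
import ast

def check_candidate(code: str) -> dict:
    """Classify a generated solution by how far it gets."""
    result = {"syntactically_valid": False, "executes": False}
    try:
        ast.parse(code)  # syntactic validity: does it parse?
        result["syntactically_valid"] = True
    except SyntaxError:
        return result
    try:
        # Execution success: does it run without raising?
        exec(compile(code, "<candidate>", "exec"), {})
        result["executes"] = True
    except Exception:
        pass
    return result
```

A real benchmark harness would run candidates in an isolated subprocess with a timeout rather than `exec` in-process, since generated quantum programs may loop or consume resources.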