QHackBench: Benchmarking Large Language Models for Quantum Code Generation Using PennyLane Hackathon Challenges

📅 2025-06-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Systematic evaluation of large language models’ (LLMs) code generation capabilities for quantum programming remains lacking. Method: This paper introduces QHackBench—the first benchmark grounded in real-world quantum programming competitions (QHack)—focusing on quantum circuit and algorithm implementation using the PennyLane framework. We propose a multi-agent iterative evaluation framework integrated with retrieval-augmented generation (RAG) to enhance accuracy and executability for complex tasks, including variational quantum eigensolvers and quantum machine learning. Contribution/Results: RAG significantly improves long-range dependency modeling; multi-agent collaborative evaluation boosts code execution success rate by 27.3%. The optimized approach achieves a 41.6% average pass-rate improvement over baseline models on QHackBench. We publicly release the benchmark dataset, evaluation framework, and prompt engineering templates—establishing a reproducible, extensible evaluation infrastructure to advance AI-assisted quantum software development.

Technology Category

Application Category

📝 Abstract
Recent advances in Large Language Models (LLMs) have demonstrated strong potential in code generation, yet their effectiveness in quantum computing remains underexplored. This paper benchmarks LLMs for PennyLane-based quantum code generation using real-world challenges from the Quantum Hackathon (QHack). We introduce QHackBench, a novel benchmark dataset derived from QHack competitions, and evaluate model performance under vanilla prompting and Retrieval-Augmented Generation (RAG). Our structured evaluation framework assesses functional correctness, syntactic validity, and execution success across varying challenge difficulties. Results indicate that RAG-enhanced models, supplemented with an augmented PennyLane dataset, approximately generate similar results as the standard prompting, particularly in complex quantum algorithms. Additionally, we introduce a multi-agent evaluation pipeline that iteratively refines incorrect solutions, further enhancing execution success rates. To foster further research, we commit to publicly releasing QHackBench, along with our evaluation framework and experimental results, enabling continued advancements in AI-assisted quantum programming.
Problem

Research questions and friction points this paper is trying to address.

Benchmarking LLMs for quantum code generation using PennyLane challenges
Evaluating model performance with vanilla prompting and RAG techniques
Assessing functional correctness and execution success in quantum algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

RAG-enhanced models for quantum code generation
Multi-agent pipeline refining incorrect solutions
Public QHackBench dataset for benchmarking
🔎 Similar Papers
No similar papers found.
A
Abdul Basit
eBRAIN Lab, Division of Engineering New York University (NYU) Abu Dhabi , Abu Dhabi, UAE
M
Minghao Shao
eBRAIN Lab, Division of Engineering New York University (NYU) Abu Dhabi , Abu Dhabi, UAE
H
Haider Asif
eBRAIN Lab, Division of Engineering New York University (NYU) Abu Dhabi , Abu Dhabi, UAE
Nouhaila Innan
Nouhaila Innan
Research Team Lead @ eBRAIN Lab, Post-Doctoral Associate, New York University Abu Dhabi
Quantum Machine LearningQuantum AlgorithmsQuantum Computing
M
Muhammad Kashif
eBRAIN Lab, Division of Engineering New York University (NYU) Abu Dhabi , Abu Dhabi, UAE
Alberto Marchisio
Alberto Marchisio
Research Team Lead @ eBRAIN Lab | Post-Doctoral Associate, New York University Abu Dhabi, UAE
machine learninghardware designneuromorphic computingquantum computing
Muhammad Shafique
Muhammad Shafique
Professor, ECE, New York University (AD-UAE, Tandon-USA), Director eBRAIN Lab
Embedded Machine LearningBrain-Inspired ComputingRobust & Energy-Efficient System DesignSmart