QuanBench: Benchmarking Quantum Code Generation with Large Language Models

📅 2025-10-19
🤖 AI Summary
Existing large language models (LLMs) lack systematic evaluation for quantum code generation, hindering progress in quantum software development. Method: We introduce QuanBench, the first comprehensive benchmark for quantum programming, comprising 44 tasks spanning quantum algorithms, state preparation, gate decomposition, and quantum machine learning. To assess semantic correctness, we propose quantum process fidelity as a rigorous metric for equivalence verification, complemented by Pass@K for functional correctness, enabling dual validation of generated code. Results: Experiments reveal that current state-of-the-art LLMs achieve less than 40% overall accuracy, with prevalent semantic errors including API misuse, incorrect circuit construction, and algorithmic logic flaws. This work establishes a standardized evaluation framework for quantum code generation, uncovers fundamental capability bottlenecks of LLMs in quantum domains, and provides a reproducible benchmark with concrete guidance for developing and refining quantum-specialized models.
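The process-fidelity check described above compares a candidate circuit's unitary against the canonical solution's, rather than comparing matrices entry by entry. A minimal sketch in plain Python, assuming the standard formula F = |Tr(U†V)|² / d² for unitaries (matrix representations and helper names here are illustrative, not the paper's implementation):

```python
import math

def process_fidelity(u, v):
    """Process fidelity |Tr(U^dag V)|^2 / d^2 between two d x d unitaries
    given as nested lists. Unlike direct matrix comparison, this score is
    invariant under a global phase, which is physically unobservable."""
    d = len(u)
    # Tr(U^dag V) equals the sum of elementwise conj(U) * V.
    tr = sum(u[i][j].conjugate() * v[i][j] for i in range(d) for j in range(d))
    return abs(tr) ** 2 / d ** 2

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]           # Hadamard gate
X = [[0.0, 1.0], [1.0, 0.0]]    # Pauli-X gate
minus_H = [[-a for a in row] for row in H]  # Hadamard with a global phase of -1

print(process_fidelity(H, H))        # close to 1.0: identical circuits
print(process_fidelity(H, minus_H))  # close to 1.0: equal up to global phase
print(process_fidelity(H, X))        # close to 0.5: inequivalent gates
```

A fidelity near 1.0 certifies semantic equivalence even when the candidate differs from the reference by gate ordering or a global phase, which is why it complements Pass@K's purely functional check.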

📝 Abstract
Large language models (LLMs) have demonstrated strong performance in general code generation; however, their capabilities in quantum code generation remain insufficiently studied. This paper presents QuanBench, a benchmark for evaluating LLMs on quantum code generation. QuanBench includes 44 programming tasks that cover quantum algorithms, state preparation, gate decomposition, and quantum machine learning. Each task has an executable canonical solution and is evaluated by functional correctness (Pass@K) and quantum semantic equivalence (process fidelity). We evaluate several recent LLMs, including general-purpose and code-specialized models. The results show that current LLMs have limited capability in generating correct quantum code, with overall accuracy below 40% and frequent semantic errors. We also analyze common failure cases, such as outdated API usage, circuit construction errors, and incorrect algorithm logic. QuanBench provides a basis for future work on improving quantum code generation with LLMs.
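The Pass@K metric mentioned in the abstract is conventionally computed with the unbiased estimator from the Codex evaluation methodology; the sketch below assumes that formulation (the paper may use a different variant):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples, drawn without replacement from n generations of which c are
    correct, passes. pass@k = 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k failures exist, so every k-subset passes
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(10, 0, 1))  # 0.0: no correct samples at all
print(pass_at_k(10, 5, 1))  # 0.5: half the samples are correct
print(pass_at_k(4, 2, 3))   # 1.0: any 3 of 4 must include a correct one
```

Generating n > k samples per task and averaging this estimator over tasks gives a lower-variance score than drawing exactly k samples.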
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' capability in generating correct quantum code
Assessing quantum programming tasks across algorithms and machine learning
Identifying common semantic errors in LLM-generated quantum circuits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark evaluates quantum code generation capabilities
Tests functional correctness and quantum semantic equivalence
Analyzes common failure cases in quantum programming
Xiaoyu Guo (Kyushu University)
Minggu Wang (Kyushu University)
Jianjun Zhao (Kyushu University)
Software Engineering · Programming Languages