Qimax: Efficient quantum simulation via GPU-accelerated extended stabilizer formalism

📅 2025-05-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Simulating high-rank, stabilizer-based near-Clifford quantum circuits on multicore devices such as GPUs suffers from low efficiency and high resource overhead. Method: this work proposes and implements, for the first time, a parallelized architecture based on an extended stabilizer formalism, breaking away from the conventional serial update paradigm. It integrates GPU acceleration, CUDA optimization, multithreaded stabilizer updates, and a parallel sampling algorithm. Contribution/results: implemented in Python, the approach outperforms mainstream simulators, including Qiskit and PennyLane, on diverse near-Clifford circuits, achieving speedups ranging from several-fold to an order of magnitude. It significantly improves throughput and scalability in high-rank scenarios, establishing an efficient, scalable paradigm for large-scale near-Clifford circuit simulation.

📝 Abstract
Simulating Clifford and near-Clifford circuits using the extended stabilizer formalism has become increasingly popular, particularly in quantum error correction. Compared to the state-vector approach, the extended stabilizer formalism can solve the same problems with fewer computational resources, as it operates on stabilizers rather than full state vectors. Most existing studies on near-Clifford circuits focus on balancing the trade-off between the number of ancilla qubits and simulation accuracy, often overlooking performance considerations. Furthermore, in the presence of high-rank stabilizers, performance is limited by the sequential nature of the stabilizer formalism. In this work, we introduce a parallelized version of the extended stabilizer formalism, enabling efficient execution on multi-core devices such as GPUs. Experimental results demonstrate that, in certain scenarios, our Python-based implementation outperforms state-of-the-art simulators such as Qiskit and PennyLane.
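The parallelism being exploited can be illustrated with a minimal NumPy sketch of the standard Aaronson-Gottesman tableau update rules (a generic illustration of the stabilizer formalism, not the paper's actual implementation): each Clifford gate rewrites whole columns of the tableau, so the updates to all 2n generator rows are independent of one another and map naturally onto GPU threads.

```python
import numpy as np

def init_tableau(n):
    # Tableau for n qubits: 2n rows (destabilizers, then stabilizers).
    # x/z hold the Pauli X/Z bits per qubit; r holds the phase bits.
    x = np.zeros((2 * n, n), dtype=np.uint8)
    z = np.zeros((2 * n, n), dtype=np.uint8)
    x[:n] = np.eye(n, dtype=np.uint8)    # destabilizers start as X_i
    z[n:] = np.eye(n, dtype=np.uint8)    # stabilizers start as Z_i
    r = np.zeros(2 * n, dtype=np.uint8)
    return x, z, r

def h(x, z, r, q):
    # Hadamard on qubit q: flip phase where both X and Z bits are set,
    # then swap the X and Z columns. One column op updates all rows at once.
    r ^= x[:, q] & z[:, q]
    x[:, q], z[:, q] = z[:, q].copy(), x[:, q].copy()

def cnot(x, z, r, c, t):
    # CNOT with control c, target t (Aaronson-Gottesman update rule).
    # Again: pure column-wise XORs, trivially parallel across rows.
    r ^= x[:, c] & z[:, t] & (x[:, t] ^ z[:, c] ^ 1)
    x[:, t] ^= x[:, c]
    z[:, c] ^= z[:, t]
```

For example, preparing a Bell state (H on qubit 0, then CNOT 0→1) turns the stabilizer rows into the generators XX and ZZ. NumPy already vectorizes these column operations on the CPU; the same access pattern is what makes a CUDA port attractive.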
Problem

Research questions and friction points this paper is trying to address.

How to simulate quantum circuits efficiently with a GPU-accelerated extended stabilizer formalism
How to improve performance on near-Clifford circuits through parallelized stabilizer updates
How to overcome the sequential nature of the stabilizer formalism in high-rank cases
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU-accelerated extended stabilizer formalism
Parallelized execution on multi-core devices
Python-based implementation that outperforms Qiskit and PennyLane
🔎 Similar Papers
No similar papers found.