Exploring Topologies in Quantum Annealing: A Hardware-Aware Perspective

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Quantum annealing is constrained by fixed hardware topologies, such as the Zephyr graph, leading to inefficient minor embedding of general optimization problem graphs: excessively long logical qubit chains, heightened noise sensitivity, and poor scalability. To address this, we propose a novel quantum processing unit (QPU) topology design grounded in the Havel–Hakimi graph construction algorithm. Leveraging both simulated minor embedding and rigorous graph-theoretic analysis, we systematically evaluate how alternative topologies affect embedding success rate, average chain length, and scalability with respect to problem size. Our results demonstrate that the Havel–Hakimi topology significantly reduces average chain length and scales the maximum embeddable problem size more smoothly as the QPU grows. This work establishes a new architectural paradigm for embedding-efficient quantum annealers and provides empirical validation for topology-aware QPU design.
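The Havel–Hakimi construction the summary refers to builds a simple graph realizing a prescribed degree sequence: repeatedly take the node of highest remaining degree d and connect it to the d next-highest-degree nodes. A minimal pure-Python sketch of that classical algorithm (illustrative only; the paper's actual QPU-topology generator and its chosen degree sequences are not reproduced here):

```python
def havel_hakimi_graph(degrees):
    """Build a simple graph realizing `degrees` via Havel-Hakimi.

    Returns a set of undirected edges (u, v) with u < v,
    or None if the degree sequence is not graphical.
    """
    # Mutable (remaining_degree, node_id) pairs.
    nodes = [[d, i] for i, d in enumerate(degrees)]
    edges = set()
    while True:
        nodes.sort(reverse=True)  # highest remaining degree first
        d, v = nodes[0]
        if d == 0:
            return edges  # every node's degree is satisfied
        if d > len(nodes) - 1:
            return None  # needs more neighbors than exist
        nodes[0][0] = 0
        # Connect v to the d next-highest-degree nodes.
        for k in range(1, d + 1):
            if nodes[k][0] == 0:
                return None  # sequence is not graphical
            nodes[k][0] -= 1
            u = nodes[k][1]
            edges.add((min(u, v), max(u, v)))

# Example: the sequence [3, 3, 2, 2, 2] is graphical (6 edges).
print(havel_hakimi_graph([3, 3, 2, 2, 2]))
```

Varying the input degree sequence is what gives this family of topologies the "controlled variation of the average node degree" mentioned in the abstract.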

📝 Abstract
Quantum Annealing (QA) offers a promising framework for solving NP-hard optimization problems, but its effectiveness is constrained by the topology of the underlying quantum hardware. Solving an optimization problem $P$ via QA involves a hardware-aware circuit compilation which requires representing $P$ as a graph $G_P$ and embedding it into the hardware connectivity graph $G_Q$ that defines how qubits connect to each other in a QA-based quantum processing unit (QPU). Minor Embedding (ME) is a possible operational form of this hardware-aware compilation. ME heuristically builds a map that associates each node of $G_P$ -- the logical variables of $P$ -- to a chain of adjacent nodes in $G_Q$ by means of one of its minors, so that the arcs of $G_P$ are preserved as physical connections among qubits in $G_Q$. The static topology of hardwired qubits can clearly lead to inefficient compilations because, at present, $G_Q$ cannot be a clique. We propose a methodology and a set of criteria to evaluate how the hardware topology $G_Q$ can negatively affect the embedded problem, thus making the quantum optimization more sensitive to noise. We evaluate the result of ME across two QPU topologies: Zephyr graphs (used in current D-Wave systems) and Havel-Hakimi graphs, which allow controlled variation of the average node degree. This enables us to study how the ratio 'number of nodes / number of incident arcs per node' affects ME success rates to map $G_P$ into a minor of $G_Q$. Our findings, obtained through ME executed on classical, i.e. non-quantum, architectures, suggest that Havel-Hakimi-based topologies, on average, require shorter qubit chains when embedding $G_P$, and exhibit smoother scaling of the largest embeddable $G_P$ as the QPU size increases. These characteristics indicate their potential as alternative designs for QA-based QPUs.
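The abstract's definition of ME implies three checkable conditions: chains are disjoint, each chain induces a connected subgraph of $G_Q$, and every arc of $G_P$ is covered by at least one physical coupler between the corresponding chains. A minimal sketch of a validity check plus the average-chain-length metric, on a hypothetical example (triangle $K_3$ into a 4-cycle) that is ours, not taken from the paper:

```python
def check_embedding(gp_edges, gq_edges, embedding):
    """Verify a minor embedding of G_P into G_Q; return average chain length."""
    gq = set()
    for u, v in gq_edges:  # store couplers in both directions
        gq.add((u, v))
        gq.add((v, u))
    # 1) chains must be pairwise disjoint
    used = [q for chain in embedding.values() for q in chain]
    assert len(used) == len(set(used)), "chains overlap"
    # 2) each chain must induce a connected subgraph of G_Q (BFS inside the chain)
    for chain in embedding.values():
        seen, frontier = {chain[0]}, [chain[0]]
        while frontier:
            q = frontier.pop()
            for r in chain:
                if r not in seen and (q, r) in gq:
                    seen.add(r)
                    frontier.append(r)
        assert seen == set(chain), "chain not connected"
    # 3) every logical arc needs a physical coupler between the two chains
    for a, b in gp_edges:
        assert any((p, q) in gq for p in embedding[a] for q in embedding[b]), \
            f"no coupler for logical edge {(a, b)}"
    return sum(len(c) for c in embedding.values()) / len(embedding)

# Hypothetical example: embed the triangle K3 into a 4-cycle of qubits.
avg = check_embedding(
    gp_edges=[("a", "b"), ("b", "c"), ("a", "c")],
    gq_edges=[(0, 1), (1, 2), (2, 3), (3, 0)],
    embedding={"a": [0], "b": [1], "c": [2, 3]},
)
print(avg)  # average chain length
```

In practice the paper's experiments rely on heuristic embedders run on classical hardware; the check above only formalizes what a valid embedding must satisfy and the chain-length statistic being compared across topologies.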
Problem

Research questions and friction points this paper is trying to address.

Quantum annealing effectiveness is limited by hardware topology constraints
Minor embedding efficiency depends on qubit connectivity graph structure
Hardware topology affects quantum optimization noise sensitivity and performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardware-aware compilation using Minor Embedding techniques
Evaluating topology impact via Zephyr and Havel-Hakimi graphs
Proposing Havel-Hakimi topologies for improved quantum optimization
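The connectivity ratio studied in the abstract, 'number of nodes / number of incident arcs per node', follows directly from an edge list, since the average degree of an undirected graph is $2|E|/|V|$. A small sketch (the 4-cycle is an illustrative stand-in, not one of the paper's benchmark topologies):

```python
def connectivity_ratio(num_nodes, edges):
    """|V| divided by the average number of incident arcs per node.

    Average degree is 2|E| / |V|, so this equals |V|^2 / (2|E|).
    """
    avg_degree = 2 * len(edges) / num_nodes
    return num_nodes / avg_degree

# 4-cycle: 4 nodes, every node has degree 2 -> ratio 2.0
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(connectivity_ratio(4, cycle))
```

Because a Havel–Hakimi construction realizes any graphical degree sequence, this ratio can be tuned directly, whereas a fixed hardware graph pins it to one value per QPU generation.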