🤖 AI Summary
Monolithic quantum processors face fundamental scalability bottlenecks in reaching hundreds or thousands of logical qubits.
Method: This work introduces the first resource estimation framework tailored for distributed quantum computing, systematically analyzing the impact of node partitioning, entanglement distillation protocols, and inter-node communication on hardware overhead and execution time. Our approach integrates quantum error-correcting code modeling, distributed execution stack simulation, quantitative modeling of entanglement generation and distillation, and algorithm-level evaluation across diverse hardware configurations.
Contribution/Results: For a 45K-physical-qubit per-node architecture, the distributed design achieves equivalent logical computational capability using only 1.4× the physical qubits and 4× the runtime of a monolithic system—substantially alleviating monolithic scaling pressure. We uncover scaling laws governing how total resource overhead depends on node size and entanglement network parameters, and validate the physical realizability and engineering scalability of the distributed designs.
📝 Abstract
To enable practically useful quantum computing, we require hundreds to thousands of logical qubits (collections of physical qubits with error correction). Current monolithic device architectures hit scaling limits beyond a few tens of logical qubits. To scale up, we require architectures that orchestrate several monolithic devices into a distributed quantum computing system. Currently, resource estimation, which is crucial for determining hardware needs and bottlenecks, focuses exclusively on monolithic systems. Our work fills this gap and answers key architectural design questions about distributed systems, including the impact of distribution on application resource needs, the organization of qubits across nodes, and the requirements of entanglement distillation (quantum network). To answer these questions, we develop a novel resource estimation framework that models the key components of the distributed execution stack. We analyse the performance of practical quantum algorithms on various hardware configurations, spanning different qubit speeds, entanglement generation rates, and distillation protocols. We show that distributed architectures have practically feasible resource requirements; for a node size of 45K qubits, distributed systems need on average 1.4× the physical qubits and 4× the execution time of monolithic architectures, but with more favourable hardware implementation prospects. Our insights on entanglement generation rates, node sizes, and architecture have the potential to inform system designs in the coming years.
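As a rough illustration of the headline numbers, the sketch below scales a monolithic resource estimate to a distributed one using the factors reported above (45K physical qubits per node, 1.4× qubit overhead, 4× runtime overhead). The helper function, the physical-per-logical qubit ratio, and the example runtime are illustrative assumptions, not part of the paper's framework:

```python
import math

# Headline factors from the abstract (at a 45K-physical-qubit node size).
NODE_SIZE = 45_000        # physical qubits per node
QUBIT_OVERHEAD = 1.4      # distributed / monolithic physical-qubit ratio
TIME_OVERHEAD = 4.0       # distributed / monolithic runtime ratio

def distributed_estimate(logical_qubits, phys_per_logical, mono_runtime_s):
    """Hypothetical helper: scale a monolithic estimate to a distributed one."""
    mono_qubits = logical_qubits * phys_per_logical
    dist_qubits = math.ceil(mono_qubits * QUBIT_OVERHEAD)
    return {
        "monolithic_qubits": mono_qubits,
        "distributed_qubits": dist_qubits,
        "nodes": math.ceil(dist_qubits / NODE_SIZE),   # nodes needed to host them
        "distributed_runtime_s": mono_runtime_s * TIME_OVERHEAD,
    }

# Example: 1,000 logical qubits at an assumed 1,000 physical qubits per logical
# qubit, with an assumed 1-hour monolithic runtime.
est = distributed_estimate(1_000, 1_000, mono_runtime_s=3_600)
# → 1.4M physical qubits across 32 nodes, 4-hour runtime
```

This kind of first-order scaling is what the abstract's comparison implies; the paper's full framework additionally models error-correcting codes, entanglement generation and distillation, and inter-node communication.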