🤖 AI Summary
This paper introduces the Learning Stabilizers with Noise (LSN) problem: the task of decoding a random stabilizer code in the presence of local depolarizing noise. The authors formally define LSN, show that it includes the classical Learning Parity with Noise (LPN) problem as a special case, and prove a worst-case-to-average-case reduction for variants of LSN. Because LSN features quantum inputs, its complexity is not captured by traditional complexity classes; instead, the authors place it within a recently introduced (distributional and oracle) unitary synthesis class. Using the stabilizer formalism and tailored quantum algorithm design, they give both polynomial-time and exponential-time quantum algorithms for LSN across several depolarizing noise regimes. The work yields a natural quantum analogue of the LPN assumption, with applications ranging from quantum bit commitment schemes to computational limitations on learning from quantum data.
📝 Abstract
Random classical codes have good error-correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory. Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.
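To make the two noise models in the abstract concrete, the following is a minimal illustrative sketch (not from the paper): the classical LPN sample distribution that LSN generalizes, and a sampler for the local depolarizing noise model under which LSN is defined, where each qubit independently suffers a uniformly random non-identity Pauli error with probability p. Function names and parameters are hypothetical.

```python
import numpy as np

def lpn_samples(s, m, eta, rng):
    """Draw m LPN samples for secret s in F_2^n at noise rate eta:
    each row a_i is uniform in F_2^n and b_i = <a_i, s> + e_i (mod 2),
    with e_i ~ Bernoulli(eta). Decoding s from (A, b) is the LPN task."""
    n = len(s)
    A = rng.integers(0, 2, size=(m, n))          # random linear code rows
    e = (rng.random(m) < eta).astype(int)        # Bernoulli noise bits
    b = (A @ s + e) % 2
    return A, b

def depolarizing_pauli(n, p, rng):
    """Sample an n-qubit Pauli error from local depolarizing noise:
    each qubit independently gets I with prob 1-p, and otherwise
    X, Y, or Z with prob p/3 each. Returned as a string, e.g. 'IXZI'."""
    err = np.full(n, "I")
    hit = rng.random(n) < p                      # which qubits are corrupted
    err[hit] = rng.choice(list("XYZ"), size=int(hit.sum()))
    return "".join(err)
```

In this picture, LPN corresponds to the special case where the noise acts classically on codeword bits, while LSN asks for the error (or the encoded message) given a corrupted codeword of a random stabilizer code.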