🤖 AI Summary
A fundamental “information-computation gap” arises in quantum learning between how much information measurements can extract and what can be computed efficiently, yet existing tools lack a systematic framework for characterizing average-case hardness. This work extends the classical low-degree likelihood method to the quantum regime, introducing a quantum low-degree likelihood framework: a unified approach covering Gibbs state learning, shallow-circuit state learning, error mitigation, and stabilizer learning. Key contributions include: (i) a new quantum planted biclique model, exposing a complexity phase transition between local and more entangled single-copy measurements; (ii) the first rigorous average-case hardness analysis under adaptively chosen measurements; and (iii) hardness proofs, built on quantum state designs and random Hamiltonian models, for learning Gibbs states of random, sparse, non-local Hamiltonians, together with the first average-case hardness results for error mitigation and stabilizer learning.
📝 Abstract
In a variety of physically relevant settings for learning from quantum data, designing protocols that extract information in a computationally efficient way remains largely an art, and there are important cases where we believe this to be impossible, that is, where there is an information-computation gap. While there is a large array of tools in the classical literature for providing evidence of average-case hardness of statistical inference problems, the corresponding tools in the quantum literature are far more limited. One such classical framework, the low-degree method, makes predictions about the hardness of inference problems based on the failure of estimators given by low-degree polynomials. In this work, we extend this framework to the quantum setting. We establish a general connection between state designs and low-degree hardness. We use this to obtain the first information-computation gaps for learning Gibbs states of random, sparse, non-local Hamiltonians. We also use it to prove hardness for learning random shallow quantum circuit states in a challenging model where states can be measured in adaptively chosen bases. To our knowledge, the ability to model adaptivity within the low-degree framework was open even in classical settings. In addition, we obtain a low-degree hardness result for quantum error mitigation against strategies with single-qubit measurements. We define a new quantum generalization of the planted biclique problem and identify the threshold at which this problem becomes computationally hard for protocols that perform local measurements. Interestingly, the complexity landscape for this problem shifts when going from local measurements to more entangled single-copy measurements. We show average-case hardness for the “standard” variant of Learning Stabilizers with Noise and for agnostically learning product states.
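For context, the classical low-degree method referenced above is standard background (not spelled out in the abstract): it compares a planted distribution \(P\) against a null distribution \(Q\) through the best degree-\(D\) polynomial test, i.e., the norm of the low-degree projection of the likelihood ratio \(L = dP/dQ\). A common formulation is:

```latex
% Degree-D advantage: the best distinguishing power achievable by a
% polynomial f of degree at most D (equivalently, the norm of the
% degree-<=D projection of the likelihood ratio L = dP/dQ):
\[
  \bigl\| L^{\le D} \bigr\|^2
  \;=\;
  \max_{\substack{f:\ \deg f \le D \\ \mathbb{E}_{X \sim Q}[f(X)^2] \ne 0}}
  \frac{\bigl(\mathbb{E}_{X \sim P}[f(X)]\bigr)^2}{\mathbb{E}_{X \sim Q}[f(X)^2]}.
\]
% Heuristic: if this quantity remains O(1) as the problem size n grows
% (for D on the order of polylog n), then no degree-D polynomial test
% distinguishes P from Q, which the low-degree framework interprets as
% evidence of average-case computational hardness.
```

The quantum extension described in this work must additionally account for the choice of measurements, which is where the distinctions among local, entangled single-copy, and adaptively chosen measurements enter.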