🤖 AI Summary
This paper refutes Hopkins' low-degree conjecture, which posits that if the low-degree advantage (LDA) vanishes as the degree $D$ grows, then no $n^{O(D)}$-time noise-tolerant distinguishing algorithm exists. Method: The authors construct two explicit counterexamples, a $k$-uniform hypergraph model and an $n \times n$ real matrix model, in which the LDA decays as $n^{-\Omega(1)}$, yet they design quasi-polynomial-time distinguishing algorithms with runtime $n^{O(\log n)}$. Their approach integrates polynomial list decoding under high error rates, permutation-invariant structural modeling, and spectral analysis of matrices. Contribution/Results: This work provides the first unconditional disproof of the widely held assumption that a vanishing LDA implies computational hardness. It fundamentally challenges the theoretical foundation of average-case complexity that relies on the LDA as a hardness proxy, demonstrating that decay of low-degree statistical signals does not necessarily entail computational intractability.
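To make the central quantity concrete, here is a toy numerical sketch (not the paper's construction) of the degree-$D$ LDA, computed as the norm $\|L^{\leq D} - 1\|$ of the low-degree part of the likelihood ratio $L = d\mu/d\nu$. The planted distribution $\mu$ is a simple product of $\delta$-biased bits and the null $\nu$ is uniform on $\{-1,1\}^n$; the parameters `n`, `D`, and `delta` are illustrative choices, and the brute-force Fourier projection is checked against the exact closed form $\sum_{d=1}^{D} \binom{n}{d}\delta^{2d}$.

```python
import itertools
import math

n, D, delta = 8, 2, 0.3
points = list(itertools.product([-1, 1], repeat=n))

# Likelihood ratio L(x) = dmu/dnu for planted mu = independent delta-biased
# bits versus the uniform null nu on {-1,1}^n.
def L(x):
    p = 1.0
    for xi in x:
        p *= 1 + delta * xi
    return p

# Fourier coefficient hat{L}(S) = E_nu[L(x) * prod_{i in S} x_i],
# computed by brute-force averaging over all 2^n points.
def fourier(S):
    return sum(L(x) * math.prod(x[i] for i in S) for x in points) / len(points)

# Squared degree-D low-degree advantage: sum of squared Fourier
# coefficients over all nonempty sets S of size at most D.
adv2 = sum(fourier(S) ** 2
           for d in range(1, D + 1)
           for S in itertools.combinations(range(n), d))

# For this product planted distribution, hat{L}(S) = delta^{|S|} exactly,
# so adv2 should equal sum_d C(n, d) * delta^(2d).
closed_form = sum(math.comb(n, d) * delta ** (2 * d) for d in range(1, D + 1))
print(abs(adv2 - closed_form) < 1e-9)  # prints True
```

In this easy product example the LDA is large and the problem is easy; the paper's point is the converse direction, exhibiting distributions where the LDA vanishes yet distinguishing remains tractable.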
📝 Abstract
There is a growing body of work on proving hardness results for average-case estimation problems by bounding the low-degree advantage (LDA), a quantitative estimate of the closeness of low-degree moments, between a null distribution and a related planted distribution. Such hardness results are now ubiquitous not only for foundational average-case problems but also for central questions in statistics and cryptography. This line of work is supported by the low-degree conjecture of Hopkins, which postulates that a vanishing degree-$D$ LDA implies the absence of any noise-tolerant distinguishing algorithm with runtime $n^{\widetilde{O}(D)}$ whenever 1) the null distribution is product on $\{0,1\}^{\binom{n}{k}}$, and 2) the planted distribution is permutation invariant, that is, invariant under any relabeling $[n] \rightarrow [n]$. In this paper, we disprove this conjecture. Specifically, we show that for any fixed $\varepsilon>0$ and $k \geq 2$, there is a permutation-invariant planted distribution on $\{0,1\}^{\binom{n}{k}}$ that has a vanishing degree-$n^{1-O(\varepsilon)}$ LDA with respect to the uniform distribution on $\{0,1\}^{\binom{n}{k}}$, yet the corresponding $\varepsilon$-noisy distinguishing problem can be solved in $n^{O(\log^{1/(k-1)}(n))}$ time. Our construction relies on list-decoding algorithms for noisy polynomial interpolation in the high-error regime. We also give another construction of a pair of planted and (non-product) null distributions on $\mathbb{R}^{n \times n}$ with a vanishing degree-$n^{\Omega(1)}$ LDA for which the largest eigenvalue serves as an efficient noise-tolerant distinguisher. Our results suggest that while a vanishing LDA may still be interpreted as evidence of hardness, developing a theory of average-case complexity based on such heuristics requires a more careful approach.