A General Framework for Computational Lower Bounds in Nontrivial Norm Approximation

📅 2026-04-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates computational lower bounds for approximating nontrivial norms of higher-order tensors, such as the spectral norm. We introduce a general framework that systematically formalizes the detection–estimation gap as a mechanism for establishing computational hardness, realized through the low-degree polynomial method under the low-degree conjecture. Applying this framework to the symmetric tensor spectral norm, we prove that any degree-$D$ algorithm with $D \leq c_d(\log p)^2$ incurs an approximation distortion of at least $p^{d/4 - 1/2}/\mathrm{polylog}(p)$. This lower bound matches existing upper bounds up to polylogarithmic factors in several important regimes, thereby revealing an inherent computational barrier characterized by the exponent $d/4 - 1/2$.
📝 Abstract
In this note, we propose a general framework for proving computational lower bounds in norm approximation by leveraging a reverse detection–estimation gap. The starting point is a testing problem together with an estimator whose error is significantly smaller than the corresponding computational detection threshold. We show that such a gap yields a lower bound on the approximation distortion achievable by any algorithm in the underlying computational class. In this way, reverse detection–estimation gaps can be turned into a general mechanism for certifying the hardness of approximating nontrivial norms. We apply this framework to the spectral norm of order-$d$ symmetric tensors in $\mathbb{R}^{p^d}$. Using a recently established low-degree hardness result for detecting nonzero high-order cumulant tensors, together with an efficiently computable estimator whose error is below the low-degree detection threshold, we prove that any degree-$D$ low-degree algorithm with $D \le c_d(\log p)^2$ must incur distortion at least $p^{d/4-1/2}/\operatorname{polylog}(p)$ for the tensor spectral norm. Under the low-degree conjecture, the same conclusion extends to all polynomial-time algorithms. In several important settings, this lower bound matches the best known upper bounds up to polylogarithmic factors, suggesting that the exponent $d/4-1/2$ captures a genuine computational barrier. Our results provide evidence that the difficulty of approximating the tensor spectral norm is not merely an artifact of existing techniques, but reflects a broader computational barrier.
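The reverse-gap mechanism in the abstract can be sketched schematically. Everything below is an illustrative reconstruction, not quoted from the paper: $\tau$ (detection threshold) and $\varepsilon$ (estimator error) are placeholder symbols, and the specific rates $\tau \asymp p^{d/4}$ and $\varepsilon \asymp p^{1/2}$ are assumptions chosen only to reproduce the stated exponent $d/4 - 1/2$.

```latex
% Illustrative sketch of the reverse detection--estimation argument.
% Assumptions (not from the paper): null tensors have small norm,
% planted tensors have norm >= tau, and an efficient estimator
% achieves error eps << tau.
\begin{align*}
  &\text{Suppose } A \text{ is a } \kappa\text{-distortion approximator: }
    \quad \|T\| / \kappa \;\le\; A(T) \;\le\; \|T\|. \\
  &\text{Thresholding } A(T) \text{ at } \tau/\kappa \text{ then separates
    null from planted instances,} \\
  &\text{so computational hardness of detection below } \tau \text{ forces} \\
  &\qquad \kappa \;\gtrsim\; \frac{\tau}{\varepsilon}
    \;\asymp\; \frac{p^{d/4}}{p^{1/2} \cdot \operatorname{polylog}(p)}
    \;=\; \frac{p^{\,d/4 - 1/2}}{\operatorname{polylog}(p)}.
\end{align*}
```

In words: an approximation algorithm with distortion much smaller than the ratio of the detection threshold to the achievable estimation error would solve a detection problem conjectured to be computationally hard, so the distortion of any algorithm in the class must be at least that ratio.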
Problem

Research questions and friction points this paper is trying to address.

norm approximation
computational lower bounds
tensor spectral norm
computational hardness
detection–estimation gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

computational lower bounds
reverse detection–estimation gap
tensor spectral norm
low-degree polynomial hardness
norm approximation
Runshi Tang
Department of Statistics, University of Wisconsin-Madison
Yuefeng Han
University of Notre Dame
Anru R. Zhang
Department of Biostatistics & Bioinformatics and Department of Computer Science, Duke University