Detection Is Harder Than Estimation in Certain Regimes: Inference for Moment and Cumulant Tensors

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the estimation and detection of higher-order moment and cumulant tensors from $n$ independent and identically distributed $p$-dimensional observations. A computationally tractable estimator, constructed via a convex feasibility program over an $\varepsilon$-net, attains a rate that nearly matches the minimax optimal bound $\sqrt{p/n} \wedge 1$. The computational complexity of detection is analyzed using the low-degree polynomial framework. The key contribution is the identification of a "reverse detection–estimation gap": under the tensor spectral norm, detection becomes computationally harder than estimation when $n \ll p^{d/2}$. This phenomenon reveals, for the first time, that the intrinsic computational difficulty of the loss function itself can induce a novel form of computational–statistical tradeoff.
📝 Abstract
We study estimation and detection of high-order moment and cumulant tensors from $n$ i.i.d. observations of a $p$-dimensional random vector, with performance measured in tensor spectral norm. On the statistical side, we prove that under sub-Gaussianity, the minimax rate for estimating the order-$d$ moment and cumulant tensors is $\sqrt{p/n}\wedge 1$. In contrast to covariance estimation, the sample moment tensor is generally no longer rate-optimal for higher-order moments. We therefore develop an estimator that attains the minimax rate up to logarithmic factors through a convex feasibility formulation over an $\varepsilon$-net of the unit sphere. On the computational side, we study the problem of testing whether the $d$-th order cumulant tensor vanishes after whitening. Using the low-degree polynomial framework, we provide evidence that detection is computationally hard when $n\ll p^{d/2}$. At the same time, we identify a regime in which an efficiently computable estimator attains error smaller than the separation at which low-degree tests can reliably distinguish the null from the alternative. This yields the striking conclusion that computationally efficient detection can be harder than computationally efficient estimation, revealing an unusual reverse detection–estimation gap: in a broad regime, computationally efficient estimation is possible at a smaller scale than computationally efficient detection. This phenomenon arises because the computational difficulty is driven not only by the statistical model, but also by the loss function itself: tensor spectral norm is NP-hard to compute. This feature makes the proposed open problems regarding computational lower bounds for estimation qualitatively different from the existing literature. Our results therefore uncover a new kind of computational–statistical gap.
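To make the objects in the abstract concrete, here is a minimal numerical sketch of the plug-in (sample) order-3 moment tensor and the $\sqrt{p/n}$ scale. This is *not* the paper's estimator — the paper notes the sample moment tensor is generally not rate-optimal and instead uses a convex feasibility program over an $\varepsilon$-net — and the specific values of `p`, `n`, and the Gaussian data are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 20000  # illustrative dimension and sample size (assumptions)

# n i.i.d. p-dimensional sub-Gaussian observations (here: standard normal).
X = rng.standard_normal((n, p))

# Plug-in estimate of the order-3 moment tensor E[x ⊗ x ⊗ x], a p×p×p array.
# The paper shows this naive estimator is generally NOT minimax-optimal for
# d >= 3 under the tensor spectral norm.
M3_hat = np.einsum('ni,nj,nk->ijk', X, X, X) / n

# For standard normal data the true third moment tensor is zero, so the
# estimation error in spectral norm is bounded by the Frobenius norm of M3_hat.
# (The tensor spectral norm itself is NP-hard to compute, as the abstract notes.)
frob_err = np.linalg.norm(M3_hat.ravel())

# The minimax scale from the abstract: sqrt(p/n) (here well below 1).
rate = np.sqrt(p / n)

print(f"Frobenius error of sample moment tensor: {frob_err:.4f}")
print(f"Minimax scale sqrt(p/n): {rate:.4f}")
```

The Frobenius norm only upper-bounds the spectral-norm error; it is used here because it is cheap to compute, whereas the tensor spectral norm in which the paper's rates are stated is NP-hard in general.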
Problem

Research questions and friction points this paper is trying to address.

tensor estimation
tensor detection
computational-statistical gap
cumulant tensor
minimax rate
Innovation

Methods, ideas, or system contributions that make the work stand out.

tensor spectral norm
minimax estimation
computational-statistical gap
low-degree polynomial framework
cumulant tensor detection
Runshi Tang
Department of Statistics, University of Wisconsin-Madison
Yuefeng Han
University of Notre Dame
Tensor Learning, Stochastic Optimization, High-dimensional Statistics, Time Series, Deep Learning
Anru R. Zhang
Department of Biostatistics & Bioinformatics and Department of Computer Science, Duke University