🤖 AI Summary
This work addresses the sample complexity barriers that gradient-based methods face in high-dimensional statistical estimation. The authors propose a strategy that integrates Langevin dynamics with iterate averaging, in which the injected noise combined with averaging implicitly smooths the optimization landscape, allowing the method to surpass known lower bounds for online SGD and for the last iterate of Langevin dynamics without explicitly smoothing the loss function. The method achieves accurate recovery of hidden directions in tensor PCA and single-index models with only $n \gtrsim d^{k^\star/2}$ samples, substantially improving upon the previous best-known requirement of $n \gtrsim d^{\max(1, k^\star-1)}$. This rate is optimal in the worst case, marking a significant advance in the theoretical understanding and practical feasibility of high-dimensional estimation under structural constraints.
📝 Abstract
Significant recent work has studied the ability of gradient descent to recover a hidden planted direction $\theta^\star \in S^{d-1}$ in different high-dimensional settings, including tensor PCA and single-index models. The key quantity that governs the ability of gradient descent to traverse these landscapes is the information exponent $k^\star$ (Ben Arous et al., 2021), which corresponds to the order of the saddle at initialization in the population landscape. Ben Arous et al. (2021) showed that $n \gtrsim d^{\max(1, k^\star-1)}$ samples are necessary and sufficient for online SGD to recover $\theta^\star$, and Ben Arous et al. (2020) proved a similar lower bound for Langevin dynamics. More recently, Damian et al. (2023) showed that it is possible to circumvent these lower bounds by running gradient descent on a smoothed landscape, and that this algorithm succeeds with $n \gtrsim d^{\max(1, k^\star/2)}$ samples, which is optimal in the worst case. This raises the question of whether the same rate can be achieved without explicit smoothing. In this paper, we show that Langevin dynamics can succeed with $n \gtrsim d^{k^\star/2}$ samples if one considers the average iterate rather than the last iterate. The key idea is that the combination of noise injection and iterate averaging emulates the effect of landscape smoothing. We apply this result to both the tensor PCA and single-index model settings. Finally, we conjecture that minibatch SGD can also achieve the same rate without adding any additional noise.
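To make the averaged-iterate idea concrete, here is a minimal sketch of spherical Langevin dynamics with iterate averaging on a toy planted-direction problem. Everything here is an illustrative assumption, not the paper's algorithm or analysis: the dimension, step size, inverse temperature, and the simple population-style loss $L(\theta) = -\langle \theta, \theta^\star \rangle^2$ are all chosen for readability, and the empirical tensor-PCA or single-index objectives studied in the paper are not implemented. The sketch only illustrates the mechanism: noisy gradient steps projected back to the sphere, with the estimate taken to be the (renormalized) average iterate rather than the last iterate.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 50                       # ambient dimension (toy scale)
theta_star = np.zeros(d)     # hidden planted direction on the sphere S^{d-1}
theta_star[0] = 1.0

def grad_loss(theta):
    # Gradient of the toy loss L(theta) = -<theta, theta_star>^2.
    # This is a stand-in with a benign landscape, not the paper's objective.
    m = theta @ theta_star
    return -2.0 * m * theta_star

def project_sphere(v):
    return v / np.linalg.norm(v)

# Spherical Langevin dynamics: noisy gradient steps, renormalized each iteration.
eta, beta = 0.01, 100.0      # step size and inverse temperature (assumed values)
T = 5000
theta = project_sphere(rng.standard_normal(d))
running_sum = np.zeros(d)
for t in range(T):
    noise = np.sqrt(2.0 * eta / beta) * rng.standard_normal(d)
    theta = project_sphere(theta - eta * grad_loss(theta) + noise)
    running_sum += theta

# The estimator is the average iterate, projected back to the sphere.
theta_avg = project_sphere(running_sum / T)
print(abs(theta_avg @ theta_star))   # correlation with the hidden direction
```

The averaging step is the point of the sketch: each individual iterate fluctuates around the planted direction because of the injected noise, but those fluctuations are roughly mean-zero in the transverse directions, so the average iterate has a higher correlation with $\theta^\star$ than a typical single iterate.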