🤖 AI Summary
This paper establishes the minimax lower bound on the convergence rate of kernelized Stein discrepancy (KSD) estimation. Addressing both multivariate Euclidean spaces and general measure spaces, it provides the first rigorous proof that no estimator can converge faster than $n^{-1/2}$, settling the optimality of existing estimators. While the rate itself is dimension-free, the explicit constant derived for the Gaussian kernel suggests that the difficulty of KSD estimation may grow exponentially with the dimension $d$, revealing a fundamental statistical bottleneck in high dimensions. The analysis unifies Langevin–Stein operators, reproducing kernel Hilbert space theory, and information-theoretic lower-bound techniques into a coherent nonparametric framework. Key contributions are: (1) derivation of a tight $n^{-1/2}$ minimax lower bound for KSD estimation; (2) confirmation that the standard U-statistic and V-statistic KSD estimators achieve this optimal rate; and (3) a rigorous theoretical foundation for high-dimensional goodness-of-fit testing and the evaluation of generative models.
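For orientation, here is the quantity being estimated, in a standard formulation from the KSD literature (paraphrased, not quoted from the paper). Given a target density $p$ with score $s_p = \nabla \log p$ and a kernel $k$, the Langevin–Stein operator induces the Stein kernel

$$u_p(x, y) = s_p(x)^\top s_p(y)\, k(x, y) + s_p(x)^\top \nabla_y k(x, y) + s_p(y)^\top \nabla_x k(x, y) + \operatorname{tr}\!\big(\nabla_x \nabla_y k(x, y)\big),$$

and the squared KSD of a sampling distribution $q$ is $\mathrm{KSD}^2(q \,\|\, p) = \mathbb{E}_{x, x' \sim q}\big[u_p(x, x')\big]$. The U-statistic estimator from a sample $x_1, \dots, x_n \sim q$ averages the Stein kernel over distinct pairs,

$$\widehat{\mathrm{KSD}}^2_U = \frac{1}{n(n-1)} \sum_{i \neq j} u_p(x_i, x_j),$$

while the V-statistic variant also includes the diagonal terms $i = j$. The paper's lower bound says no estimator of this quantity can improve on the $n^{-1/2}$ rate these estimators attain.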
📝 Abstract
Kernel Stein discrepancies (KSDs) have emerged as a powerful tool for quantifying goodness-of-fit over the last decade, featuring numerous successful applications. To the best of our knowledge, all existing KSD estimators with known rate achieve $\sqrt{n}$-convergence. In this work, we present two complementary results (with different proof strategies), establishing that the minimax lower bound of KSD estimation is $n^{-1/2}$ and settling the optimality of these estimators. Our first result focuses on KSD estimation on $\mathbb{R}^d$ with the Langevin–Stein operator; our explicit constant for the Gaussian kernel indicates that the difficulty of KSD estimation may increase exponentially with the dimensionality $d$. Our second result settles the minimax lower bound for KSD estimation on general domains.
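To make the estimator concrete, below is a minimal sketch of the U-statistic KSD estimator with the Langevin–Stein operator and a Gaussian kernel, matching the paper's first setting. The function name, the fixed bandwidth `sigma`, and the standard-Gaussian example target are illustrative choices, not taken from the paper.

```python
import numpy as np

def ksd_u_stat(X, score_fn, sigma=1.0):
    """U-statistic estimate of the squared KSD of a sample X (shape (n, d))
    against a target whose score function (gradient of log density) is score_fn."""
    n, d = X.shape
    S = score_fn(X)                                # (n, d) target scores at the samples
    diffs = X[:, None, :] - X[None, :, :]          # (n, n, d): x_i - x_j
    sq_dists = np.sum(diffs ** 2, axis=-1)         # (n, n) squared pairwise distances
    K = np.exp(-sq_dists / (2 * sigma ** 2))       # Gaussian kernel matrix

    # Stein kernel for the Langevin-Stein operator (gradients of the Gaussian
    # kernel are available in closed form):
    #   u_p(x,y) = s(x)'s(y) k + s(x)'grad_y k + s(y)'grad_x k + tr(grad_x grad_y k)
    sx_diff = np.einsum('id,ijd->ij', S, diffs)    # s(x_i)'(x_i - x_j)
    sy_diff = np.einsum('jd,ijd->ij', S, diffs)    # s(x_j)'(x_i - x_j)
    U = ((S @ S.T)
         + sx_diff / sigma ** 2                    # s(x)'grad_y k, divided by k
         - sy_diff / sigma ** 2                    # s(y)'grad_x k, divided by k
         + d / sigma ** 2 - sq_dists / sigma ** 4  # tr(grad_x grad_y k), divided by k
         ) * K

    np.fill_diagonal(U, 0.0)                       # U-statistic drops diagonal terms
    return U.sum() / (n * (n - 1))

# Example: standard Gaussian target, whose score is s(x) = -x.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
print(ksd_u_stat(X, score_fn=lambda X: -X))        # near 0 for a well-fitting sample
```

Because the U-statistic is unbiased, the estimate can be slightly negative for a well-fitting sample; it concentrates around the true squared KSD at the $n^{-1/2}$ rate the paper shows to be minimax optimal.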