🤖 AI Summary
This paper addresses the optimal estimation of linear, mean-square-continuous functionals—such as average treatment effects and average derivatives—in high-dimensional approximately sparse regression. Existing methods rely on exact sparsity assumptions and thus fail to accommodate approximate sparsity arising from nonparametric basis expansions (e.g., splines, kernels, or polynomials); moreover, their minimax optimality remains unestablished. To bridge this gap, we propose a class of automated debiased machine learning estimators that require only single or weak cross-fitting. These estimators relax the convergence rate requirement on base learners to faster than $n^{-1/4}$—substantially weaker than the conventional product-rate condition. Leveraging minimax theory and semiparametric efficiency bounds, we establish their $\sqrt{n}$-consistency, asymptotic normality, and semiparametric efficiency. Under mild regularity conditions, they achieve the optimal convergence rate for the target functionals.
📝 Abstract
This paper is about the ability and means to root-n consistently and efficiently estimate linear, mean square continuous functionals of a high dimensional, approximately sparse regression. Such objects include a wide variety of interesting parameters such as the covariance between two regression residuals, a coefficient of a partially linear model, an average derivative, and the average treatment effect. We give lower bounds on the convergence rate of estimators of such objects and find that these bounds are substantially larger than in a low dimensional, semiparametric setting. We also give automatic debiased machine learners that are $1/\sqrt{n}$ consistent and asymptotically efficient under minimal conditions. These estimators use no cross-fitting or a special kind of cross-fitting to attain efficiency with faster than $n^{-1/4}$ convergence of the regression. This rate condition is substantially weaker than the product of convergence rates of two functions being faster than $1/\sqrt{n}$, as required for many other debiased machine learners.
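To make the debiasing-with-cross-fitting idea concrete, below is a minimal, hypothetical sketch of one of the functionals the abstract mentions: the coefficient of a partially linear model, estimated by residualizing both the outcome and the treatment with cross-fitted nuisance regressions. The data-generating process, sample sizes, and the plain least-squares "learner" are illustrative assumptions, not the paper's estimator; the paper's contribution concerns automatic debiasing and weaker rate conditions, which this toy example does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 5
theta = 2.0  # true partially linear coefficient (illustrative)

# Synthetic data: D depends on X; Y = theta*D + linear-in-X confounding + noise
X = rng.normal(size=(n, p))
D = X @ np.full(p, 0.3) + rng.normal(size=n)
Y = theta * D + X @ np.full(p, 0.5) + rng.normal(size=n)

def fit_predict(X_train, y_train, X_test):
    # Least squares stands in for an arbitrary ML regression learner
    coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return X_test @ coef

# Two-fold cross-fitting: fit nuisances on one fold, residualize on the other
folds = np.array_split(rng.permutation(n), 2)
num = den = 0.0
for k in range(2):
    test, train = folds[k], folds[1 - k]
    rD = D[test] - fit_predict(X[train], D[train], X[test])  # treatment residual
    rY = Y[test] - fit_predict(X[train], Y[train], X[test])  # outcome residual
    num += rD @ rY
    den += rD @ rD

theta_hat = num / den  # residual-on-residual estimate of theta
print(theta_hat)
```

Because the estimator depends on the nuisance fits only through residuals, first-order errors in the two regressions cancel, which is the bias-removal mechanism that the paper's automatic debiased machine learners generalize under weaker rate conditions.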