🤖 AI Summary
This work addresses the efficient estimation of pathwise differentiable functionals in nonparametric models by proposing an approach that bypasses explicit computation of the efficient influence function. The method formulates the universal least favorable submodel as a nonlinear ordinary differential equation on the space of probability densities and constructs a data-adaptive debiasing flow within a reproducing kernel Hilbert space (RKHS) to yield a plug-in estimator. This framework enables simultaneous, numerically stable, and efficient estimation of a broad class of pathwise differentiable parameters. Under standard regularity conditions, the proposed estimator is regular and asymptotically linear, achieving the semiparametric efficiency bound. Finite-sample simulations corroborate its theoretical properties and practical utility.
📝 Abstract
We propose ULFS-KDPE, a kernel debiased plug-in estimator based on the universal least favorable submodel, for estimating pathwise differentiable parameters in nonparametric models. The method constructs a data-adaptive debiasing flow in a reproducing kernel Hilbert space (RKHS), producing a plug-in estimator that achieves semiparametric efficiency without requiring explicit derivation or evaluation of efficient influence functions. We place ULFS-KDPE on a rigorous functional-analytic foundation by formulating the universal least favorable update as a nonlinear ordinary differential equation on probability densities. We establish existence, uniqueness, stability, and finite-time convergence of the empirical score along the induced flow. Under standard regularity conditions, the resulting estimator is regular, asymptotically linear, and attains the semiparametric efficiency bound simultaneously for a broad class of pathwise differentiable parameters. The method admits a computationally tractable implementation based on finite-dimensional kernel representations and principled stopping criteria. In finite samples, solving a rich collection of score equations, combined with RKHS-based smoothing and the avoidance of direct influence-function evaluation, improves numerical stability. Simulation studies illustrate the method and support the theoretical results.
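To make the flow-based idea concrete, the toy sketch below is a purely schematic illustration, not the paper's ULFS-KDPE algorithm: it debiases a deliberately biased, kernel-smoothed plug-in estimate of the simple target ψ(P) = E[X] by exponentially tilting the weights of a weighted empirical measure along a Gaussian-kernel (RKHS-smoothed) score direction, and it stops when the empirical score equation is approximately solved. The choice of kernel, bandwidth, step size, stopping tolerance, and target functional are all illustrative assumptions.

```python
import numpy as np

# Schematic toy (NOT the paper's algorithm): debias a smoothed plug-in
# estimate of psi(P) = E[X] by flowing the weights of a weighted empirical
# measure along an RKHS-smoothed score direction until the empirical score
# equation is (approximately) solved.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)        # skewed sample, E[X] = 2

def gaussian_kernel(a, b, h=1.0):
    """Gaussian RKHS kernel k(a, b) = exp(-(a - b)^2 / (2 h^2))."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * h**2))

K = gaussian_kernel(x, x)

# Deliberately biased initial estimate: kernel-density weights overweight
# the bulk of the skewed sample, pulling the plug-in mean below the
# sample mean, so there is genuine bias for the flow to remove.
w = K.mean(axis=1)
w /= w.sum()

step, tol, max_iter = 0.05, 1e-6, 5000
for _ in range(max_iter):
    psi = w @ x                           # current plug-in estimate
    emp_score = x.mean() - psi            # empirical score for the mean functional
    if abs(emp_score) < tol:              # principled stopping criterion
        break
    s = K @ (x - psi) / len(x)            # RKHS-smoothed score direction
    w = w * np.exp(step * emp_score * s)  # Euler step of an exponential-tilt flow
    w /= w.sum()                          # stay on the probability simplex

print(f"debiased plug-in = {w @ x:.6f}, sample mean = {x.mean():.6f}")
```

For the mean, the flow simply drives the weighted plug-in toward the sample mean, which is already the efficient estimator; the point of the sketch is the mechanics (weighted plug-in, kernel-smoothed score direction, Euler discretization of a flow, stopping when the empirical score is near zero), not the example itself.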