🤖 AI Summary
This paper investigates the statistical robustness of nonparametric regression under input perturbations ($X$-attacks). Addressing the lack of theoretical optimality guarantees in existing adversarial learning methods, the authors derive the exact minimax convergence rate for the adversarial $L_q$ risk ($1 \leq q \leq \infty$). Based on this characterization, they propose a class of piecewise local polynomial estimators and rigorously prove that these achieve the derived minimax rate, yielding the first theoretically optimal adversarially robust nonparametric estimator. They further construct an adaptive variant that attains the same optimal rate up to a logarithmic factor without requiring prior knowledge of the underlying function's smoothness. The work unifies the fundamental statistical limits and the constructive algorithms for nonparametric regression under $X$-attacks, thereby filling a critical gap in the statistical theory of adversarial robustness.
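The summary does not spell out the risk being minimaxed. A common formalization consistent with it (the perturbation set $\Delta$ and the radius notation below are assumptions for illustration, not quoted from the paper) takes the worst case over admissible input perturbations before averaging:

$$
R_q(\hat{f}, f) \;=\; \left( \mathbb{E} \int \sup_{\delta \in \Delta} \big| \hat{f}(x + \delta) - f(x) \big|^{q} \, dx \right)^{1/q}, \qquad 1 \leq q < \infty,
$$

with the usual supremum analogue at $q = \infty$. Here $\Delta$ would be the perturbation set (e.g., an $\ell_p$ ball of radius $\rho$), whose magnitude enters the minimax rate alongside the smoothness level of $f$.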
📝 Abstract
Despite tremendous advances in machine learning models and algorithms across application domains, these models are known to be vulnerable to subtle perturbations in future input data, whether naturally occurring or intentionally crafted, commonly referred to as adversarial attacks. While numerous adversarial learning methods have been proposed, fundamental questions about their statistical optimality in robust loss remain largely unanswered. In particular, the minimax rate of convergence and the construction of rate-optimal estimators under future $X$-attacks are yet to be worked out. In this paper, we address this issue in the context of nonparametric regression, under suitable assumptions on the smoothness of the regression function and the geometric structure of the input perturbation set. We first establish the minimax rate of convergence under adversarial $L_q$-risks with $1 \leq q \leq \infty$ and propose a piecewise local polynomial estimator that achieves the minimax optimality. The established minimax rate elucidates how the smoothness level and the perturbation magnitude affect the fundamental limit of adversarial learning under future $X$-attacks. Furthermore, we construct a data-driven adaptive estimator that is shown to achieve, within a logarithmic factor, the optimal rate across a broad scale of nonparametric and adversarial classes.
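To make the estimator class concrete, here is a minimal one-dimensional sketch of a piecewise polynomial fit: partition the covariate domain into cells and fit an independent low-degree polynomial in each. All function names, the uniform partition, and the fixed degree are illustrative assumptions; the paper's construction would tie the cell width and polynomial degree to the smoothness level and the perturbation magnitude, which this sketch does not attempt.

```python
import numpy as np

def fit_piecewise_poly(x, y, num_cells=8, degree=1):
    """Fit an independent polynomial of the given degree on each cell of a
    uniform partition of [0, 1]. Returns the cell edges and coefficients."""
    edges = np.linspace(0.0, 1.0, num_cells + 1)
    # Assign each sample to a cell (last cell is right-closed).
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, num_cells - 1)
    coefs = []
    for k in range(num_cells):
        in_cell = idx == k
        if in_cell.sum() > degree:
            coefs.append(np.polyfit(x[in_cell], y[in_cell], degree))
        else:
            # Too few points for a stable fit: fall back to the cell mean.
            c = np.zeros(degree + 1)
            c[-1] = y[in_cell].mean() if in_cell.any() else 0.0
            coefs.append(c)
    return edges, coefs

def predict(edges, coefs, x_new):
    """Evaluate the piecewise polynomial fit at new input points."""
    num_cells = len(coefs)
    idx = np.clip(np.searchsorted(edges, x_new, side="right") - 1, 0, num_cells - 1)
    return np.array([np.polyval(coefs[k], xi) for k, xi in zip(idx, x_new)])

# Toy usage: noisy samples of a smooth regression function.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(500)
edges, coefs = fit_piecewise_poly(x, y, num_cells=8, degree=1)
print(predict(edges, coefs, np.array([0.25, 0.5, 0.75])))
```

Fitting each cell independently is what makes the estimator "piecewise": the fit in one cell is unaffected by data elsewhere, so the local approximation error can be balanced against the perturbation radius cell by cell.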