Asymptotic Behavior of Adversarial Training Estimator under β„“βˆž-Perturbation

πŸ“… 2024-01-27
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 2
✨ Influential: 0
πŸ€– AI Summary
This paper investigates the asymptotic properties of β„“βˆž-adversarial training estimators under generalized linear models, focusing on sparse recovery when the true parameter vector is zero. We establish theoretically that, under β„“βˆž perturbations, the estimator’s asymptotic distribution assigns positive probability mass at zeroβ€”thereby providing the first rigorous proof of its asymptotic sparse recovery property. Building upon this insight, we propose a two-stage adaptive adversarial training framework that employs data-driven weight tuning to simultaneously achieve variable selection consistency and asymptotically unbiased estimation. Simulation studies and empirical analysis demonstrate that the proposed method significantly outperforms standard adversarial training in both variable identification accuracy and parameter estimation precision.

πŸ“ Abstract
Adversarial training has been proposed to protect machine learning models against adversarial attacks. This paper focuses on adversarial training under $\ell_\infty$-perturbation, which has recently attracted much research attention. The asymptotic behavior of the adversarial training estimator is investigated in the generalized linear model. The results imply that the asymptotic distribution of the adversarial training estimator under $\ell_\infty$-perturbation could put a positive probability mass at $0$ when the true parameter is $0$, providing a theoretical guarantee of the associated sparsity-recovery ability. Alternatively, a two-step procedure is proposed -- adaptive adversarial training, which could further improve the performance of adversarial training under $\ell_\infty$-perturbation. Specifically, the proposed procedure could achieve asymptotic variable-selection consistency and unbiasedness. Numerical experiments are conducted to show the sparsity-recovery ability of adversarial training under $\ell_\infty$-perturbation and to compare the empirical performance between classic adversarial training and adaptive adversarial training.
Problem

Research questions and friction points this paper is trying to address.

Investigates asymptotic behavior of adversarial training under $\ell_\infty$-perturbation.
Proposes adaptive adversarial training for improved performance and sparsity recovery.
Provides theoretical guarantees for sparsity-recovery ability in generalized linear models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Investigates adversarial training under $\ell_\infty$-perturbation.
Proposes adaptive adversarial training for improved performance.
Achieves asymptotic variable-selection consistency and unbiasedness.
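The two-step procedure is only summarized above. By analogy with the adaptive lasso, and using the $\ell_1$ connection of $\ell_\infty$-adversarial training in linear models, one plausible reading is a pilot fit followed by a weighted-$\ell_1$ fit with data-driven weights $1/|\hat\beta_j^{\text{pilot}}|$. The weighting rule, function names, and ISTA solver below are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def soft(z, t):
    """Coordinate-wise soft-thresholding operator (prox of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_sparse_fit(X, y, lam=0.05, steps=500):
    """Two-step sketch in the spirit of adaptive adversarial training
    (adaptive-lasso-style weighting -- an assumption, not the paper's rule).

    Step 1: pilot least-squares fit.
    Step 2: weighted-l1 fit via ISTA with weights 1 / |pilot_j|.
    """
    n, p = X.shape
    pilot, *_ = np.linalg.lstsq(X, y, rcond=None)   # step 1: pilot estimate
    w = 1.0 / np.maximum(np.abs(pilot), 1e-8)       # data-driven weights
    L = np.linalg.eigvalsh(X.T @ X / n).max()       # Lipschitz constant of smooth part
    beta = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ beta - y) / n             # gradient of (1/2n)||y - Xb||^2
        beta = soft(beta - grad / L, lam * w / L)   # proximal gradient step
    return beta
```

Because null coordinates get large weights (small pilot estimates), their penalty is heavy and they are thresholded exactly to zero, while well-identified coordinates receive light penalties, mirroring the variable-selection consistency and asymptotic unbiasedness claimed above.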
Yiling Xie
School of Industrial and Systems Engineering, Georgia Institute of Technology
Xiaoming Huo
Professor, Georgia Institute of Technology
statistics Β· data science Β· machine learning