🤖 AI Summary
This work investigates the geometric mechanisms underlying adversarial training of high-dimensional linear classifiers and the resulting robustness–accuracy trade-off. Leveraging the Block Feature Model within a high-dimensional asymptotic analysis framework, we derive an exact characterization of the sufficient statistics of the adversarial empirical risk minimizer, analytically exposing the interplay among data directionality, feature type, and attack geometry. Our theory reveals that non-robust features possess “defendable directions”: specific orientations along which adversarial perturbations can be mitigated without sacrificing standard accuracy. Uniformly protecting such directions significantly enhances robustness while preserving clean accuracy. Moreover, heterogeneous feature types critically influence sample complexity: our analysis provides a theoretical explanation for the high sample complexity of adversarial training and enables a quantitative, direction-level characterization of both robustness and generalization.
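To make the setup concrete, the sketch below trains a linear classifier by adversarial empirical risk minimization under an $\ell_2$-bounded attack, exploiting the closed-form inner maximization available for non-increasing losses. It is an illustrative toy: the hinge loss, Gaussian-mixture data, and all hyperparameters are our own assumptions, not the paper's Block Feature Model.

```python
import numpy as np

# Minimal sketch of adversarial empirical risk minimization for a linear
# classifier under an l2-bounded attack of radius eps. Toy setup with a
# hinge loss and Gaussian-mixture data; this is NOT the paper's Block
# Feature Model, and the hyperparameters below are illustrative guesses.
# For a non-increasing loss the inner maximization has the closed form
#   max_{||delta||_2 <= eps} loss(y * w @ (x + delta))
#     = loss(y * w @ x - eps * ||w||_2),
# which is what the subgradient steps below minimize.

rng = np.random.default_rng(0)
n, d, eps, lr = 200, 400, 0.3, 0.1   # alpha = n / d = 0.5

# Data: labels y = +/-1, inputs x = y * mu + isotropic noise.
mu = rng.normal(size=d) / np.sqrt(d)
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.normal(size=(n, d)) / np.sqrt(d)

w = np.zeros(d)
for _ in range(2000):
    margins = y * (X @ w) - eps * np.linalg.norm(w)  # worst-case margins
    active = margins < 1.0                           # hinge loss active here
    # Subgradient of (1/n) * sum_i max(0, 1 - margins_i) w.r.t. w.
    grad = -(y[active, None] * X[active]).sum(axis=0) / n
    if np.linalg.norm(w) > 0:
        grad += (eps * active.sum() / n) * w / np.linalg.norm(w)
    w -= lr * grad

clean_acc = (np.sign(X @ w) == y).mean()                         # standard accuracy
robust_acc = (y * (X @ w) - eps * np.linalg.norm(w) > 0).mean()  # accuracy under attack
print(f"clean accuracy:  {clean_acc:.2f}")
print(f"robust accuracy: {robust_acc:.2f}")
```

Varying `eps` in this sketch traces out the qualitative robustness–accuracy trade-off discussed above.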
📝 Abstract
This work investigates adversarial training in the context of margin-based linear classifiers in the high-dimensional regime where the dimension $d$ and the number of data points $n$ diverge with a fixed ratio $\alpha = n / d$. We introduce a tractable mathematical model in which the interplay between the data and adversarial attacker geometries can be studied, while capturing the core phenomenology observed in the adversarial robustness literature. Our main theoretical contribution is an exact asymptotic description of the sufficient statistics for the adversarial empirical risk minimiser, under generic convex and non-increasing losses, for a Block Feature Model. Our result allows us to precisely characterise which directions in the data are associated with a higher generalisation/robustness trade-off, as defined by a robustness and a usefulness metric. We show that the presence of multiple different feature types is crucial to the high sample complexity of adversarial training. In particular, we unveil the existence of directions which can be defended without penalising accuracy. Finally, we show the advantage of defending non-robust features during training, identifying uniform protection as an inherently effective defence mechanism.
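Schematically, and assuming an attack constrained to a norm ball of radius $\varepsilon$ (the paper's attack geometry may be more general), the adversarial empirical risk minimiser takes the form

$$
\hat{\boldsymbol{w}} \in \operatorname*{argmin}_{\boldsymbol{w} \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \max_{\lVert \boldsymbol{\delta}_i \rVert \le \varepsilon} \ell\big(y_i\, \boldsymbol{w}^\top (\boldsymbol{x}_i + \boldsymbol{\delta}_i)\big),
$$

and for a convex, non-increasing loss $\ell$ the inner maximisation reduces to the closed form $\ell\big(y_i\, \boldsymbol{w}^\top \boldsymbol{x}_i - \varepsilon \lVert \boldsymbol{w} \rVert_\star\big)$, where $\lVert \cdot \rVert_\star$ denotes the dual norm of the attack constraint.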