🤖 AI Summary
This paper studies targeted robustness evaluation of linear classifiers under label-poisoning attacks: given a training set, how to automatically quantify its resilience against attacks that perturb only a few labels in order to flip the prediction of a specific test instance. The authors first prove that exactly computing this robustness is NP-complete under the constraints of label-only perturbations and limited attacker knowledge. To address this intractability, they propose an efficient method based on adversarial label-perturbation modeling and optimization, yielding theoretically sound and computationally feasible lower and upper bounds on robustness under hypothesis-space constraints. Experiments on multiple public benchmarks demonstrate the tightness and practical utility of the bounds: poisoning that exceeds them consistently induces significant performance degradation. Compared to state-of-the-art approaches, the method is broadly applicable, requires no model retraining, and provides a scalable, automated framework for dataset security assessment.
📝 Abstract
Data poisoning is a training-time attack that undermines the trustworthiness of learned models. In a targeted data poisoning attack, an adversary manipulates the training dataset to alter the classification of a targeted test point. Given the typically large size of training datasets, manual detection of poisoning is difficult. An alternative is to automatically measure a dataset's robustness against such an attack, which is the focus of this paper. We consider a threat model wherein an adversary can only perturb the labels of the training dataset, with knowledge limited to the hypothesis space of the victim's model. In this setting, we prove that finding the robustness is an NP-complete problem, even when the hypotheses are linear classifiers. To overcome this, we present a technique that finds lower and upper bounds on robustness. Our implementation computes these bounds efficiently in practice for many publicly available datasets. We experimentally demonstrate the effectiveness of our approach: poisoning that exceeds the identified robustness bounds significantly impacts the classification of the target test point. We are also able to compute these bounds in many cases where state-of-the-art techniques fail.
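The robustness notion above, the minimum number of training-label flips that changes a target test point's prediction, can be illustrated with a brute-force sketch. This is not the paper's bounding technique (whose point is precisely to avoid this exponential search); it is a minimal toy that makes the quantity being bounded concrete, using an assumed least-squares linear classifier on ±1 labels:

```python
# Toy illustration (NOT the paper's method): exact targeted robustness
# by exhaustive search over label-flip subsets. The least-squares
# "linear classifier" and all names here are illustrative assumptions.
from itertools import combinations
import numpy as np

def predict(X, y, x_test):
    # Fit a linear classifier by least squares on +/-1 labels,
    # then classify the test point by the sign of its score.
    w = np.linalg.pinv(X) @ y
    return 1 if x_test @ w >= 0 else -1

def robustness(X, y, x_test):
    # Smallest number of label flips that changes the prediction
    # on x_test; exponential in len(y), hence only viable on toys.
    base = predict(X, y, x_test)
    n = len(y)
    for k in range(1, n + 1):              # try budgets in increasing order
        for idx in combinations(range(n), k):
            y_flip = y.copy()
            y_flip[list(idx)] *= -1        # flip the chosen labels
            if predict(X, y_flip, x_test) != base:
                return k                   # first (smallest) working budget
    return n + 1                           # prediction cannot be flipped

# Four 1-D points with a bias feature appended as a second column.
X = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, 1.0], [-2.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
print(robustness(X, y, np.array([1.5, 1.0])))  # -> 1
```

On this tiny set, flipping a single well-chosen label already moves the fitted separator enough to reclassify the target, so the robustness is 1. The combinatorial blow-up of the subset search is exactly the obstacle that motivates the paper's efficient lower and upper bounds.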