🤖 AI Summary
Prior research on probabilistic robustness (PR) lacks standardized evaluation protocols and strong baselines, hindering fair comparison and progress. Method: We introduce PRBench, the first dedicated benchmark for PR, comprising seven datasets and ten model architectures and enabling systematic evaluation of 222 models across clean accuracy, adversarial robustness (AR), PR, and generalization error. We propose a unified evaluation framework together with a theoretical analysis of the generalization error of PR performance. Contribution/Results: Our analysis reveals that standard adversarial training (AT), traditionally considered suboptimal for PR, consistently outperforms existing PR-specific methods across most settings. AT achieves superior robustness and versatility across hyperparameter settings, whereas PR-targeted methods, though attaining marginally higher clean accuracy and lower generalization error, exhibit constrained overall PR performance. PRBench provides a reproducible, standardized platform for future PR research, facilitating transparent benchmarking and methodological advancement.
📝 Abstract
Deep learning models are notoriously vulnerable to imperceptible perturbations. Most existing research centers on adversarial robustness (AR), which evaluates models under worst-case scenarios by examining the existence of deterministic adversarial examples (AEs). In contrast, probabilistic robustness (PR) adopts a statistical perspective, measuring the probability that predictions remain correct under stochastic perturbations. While PR is widely regarded as a practical complement to AR, dedicated training methods for improving PR remain relatively underexplored, though progress is emerging. Among the few existing PR-targeted training methods, we identify three limitations: (i) non-comparable evaluation protocols; (ii) limited comparisons to strong AT baselines despite anecdotal PR gains from AT; and (iii) no unified framework to compare the generalization of these methods. Thus, we introduce PRBench, the first benchmark dedicated to evaluating improvements in PR achieved by different robustness training methods. PRBench empirically compares the most common AT and PR-targeted training methods using a comprehensive set of metrics, including clean accuracy, PR and AR performance, training efficiency, and generalization error (GE). We also provide a theoretical analysis of the GE of PR performance across different training methods. Main findings revealed by PRBench include: AT methods are more versatile than PR-targeted training methods in improving both AR and PR performance across diverse hyperparameter settings, while PR-targeted training methods consistently yield lower GE and higher clean accuracy. A leaderboard comprising 222 trained models across 7 datasets and 10 model architectures is publicly available at https://tmpspace.github.io/PRBenchLeaderboard/.
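To make the AR/PR distinction concrete: whereas AR asks whether *any* perturbation in a norm ball flips the prediction, PR estimates the *fraction* of perturbations that do. The following is a minimal Monte Carlo sketch of that idea, assuming uniform sampling from an L∞ ball around the input; the toy linear classifier and the function name `estimate_pr` are illustrative stand-ins, not the paper's evaluation protocol or models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a trained model
# (hypothetical stub; PRBench evaluates deep networks).
W = rng.normal(size=(10, 784))

def predict(x):
    return int(np.argmax(W @ x))

def estimate_pr(x, label, eps=0.1, n_samples=1000):
    """Monte Carlo estimate of probabilistic robustness for one input:
    the fraction of perturbations delta, sampled uniformly with
    ||delta||_inf <= eps, under which the prediction stays correct."""
    correct = 0
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) == label:
            correct += 1
    return correct / n_samples

x = rng.normal(size=784)
y = predict(x)  # treat the clean prediction as the reference label
pr = estimate_pr(x, y)
print(f"estimated PR: {pr:.3f}")
```

A PR score of 1.0 means no sampled perturbation changed the prediction; AR, by contrast, would report the input as non-robust as soon as a single adversarial perturbation exists anywhere in the ball, which is why the two metrics can rank models differently.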