Simplifying Adversarially Robust PAC Learning with Tolerance

📅 2025-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the exponential dependence of sample complexity on the VC dimension in adversarially robust PAC learning. Working in the relaxed setting of adversarially robust learning with *tolerance*, we give the first learner that achieves sample complexity linear in the VC dimension without structural assumptions (e.g., separability or geometric constraints) on the hypothesis class ℋ. Although improper, the learner is "almost proper": it outputs a hypothesis similar to one in ℋ, obtained via a tolerance-based definition of adversarial robustness, a streamlined compression scheme, and similarity-based hypothesis construction. We further extend these ideas to the semi-supervised setting, where our learner achieves bounds comparable to previous non-tolerant semi-supervised methods while eliminating their intricate subroutines and remaining "almost proper" and substantially simpler.

📝 Abstract
Adversarially robust PAC learning has proved to be challenging, with the currently best known learners [Montasser et al., 2021a] relying on improper methods based on intricate compression schemes, resulting in sample complexity exponential in the VC-dimension. A series of follow-up works considered a slightly relaxed version of the problem called adversarially robust learning with tolerance [Ashtiani et al., 2023, Bhattacharjee et al., 2023, Raman et al., 2024] and achieved better sample complexity in terms of the VC-dimension. However, those algorithms were either improper and complex, or required additional assumptions on the hypothesis class H. We prove, for the first time, the existence of a simpler learner that achieves a sample complexity linear in the VC-dimension without requiring additional assumptions on H. Even though our learner is improper, it is "almost proper" in the sense that it outputs a hypothesis that is "similar" to a hypothesis in H. We also use the ideas from our algorithm to construct a semi-supervised learner in the tolerant setting. This simple algorithm achieves comparable bounds to the previous (non-tolerant) semi-supervised algorithm of Attias et al. [2022a], but avoids the use of intricate subroutines from previous works, and is "almost proper."
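The tolerant guarantee underlying this line of work can be sketched as follows. This is based on the standard definition from the tolerant robust learning literature, not quoted from the paper; the symbols $\mathcal{U}$, $\mathcal{U}_{\gamma}$, and the enlargement factor $(1+\gamma)$ are illustrative assumptions. The robust risk of a hypothesis $h$ with respect to perturbation sets $\mathcal{U}(x)$ is

```latex
\[
  \mathrm{R}_{\mathcal{U}}(h) \;=\; \Pr_{(x,y)\sim D}\bigl[\,\exists\, z \in \mathcal{U}(x) : h(z) \neq y\,\bigr],
\]
% Tolerant guarantee: the learner's output is compared against the best
% hypothesis in H measured with slightly enlarged perturbation sets
% U_gamma (e.g., balls of radius (1+gamma)r instead of r):
\[
  \mathrm{R}_{\mathcal{U}}(\hat{h}) \;\le\; \min_{h \in \mathcal{H}} \mathrm{R}_{\mathcal{U}_{\gamma}}(h) \;+\; \varepsilon .
\]
```

Relaxing the comparator from $\mathcal{U}$ to the enlarged $\mathcal{U}_{\gamma}$ is what makes the improved (linear-in-VC) sample complexity achievable.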
Problem

Research questions and friction points this paper is trying to address.

Sample complexity of adversarially robust PAC learning is exponential in the VC-dimension
Existing tolerant learners are either improper and complex, or require additional assumptions on H
Prior semi-supervised robust learners rely on intricate subroutines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simplifies adversarially robust PAC learning in the tolerant setting
Achieves sample complexity linear in the VC-dimension without assumptions on H
Constructs an "almost proper" semi-supervised learner with comparable bounds