Sample-Near-Optimal Agnostic Boosting with Improved Running Time

📅 2026-01-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work proposes the first agnostic boosting algorithm that simultaneously achieves near-optimal sample complexity and polynomial runtime. In the assumption-free agnostic setting, boosting has long faced a fundamental trade-off between statistical efficiency and computational tractability. Addressing this challenge, the authors introduce a polynomial-time algorithm grounded in agnostic learning theory. With all other problem parameters held fixed, the algorithm's runtime scales polynomially with the sample size, a significant improvement over the prior algorithm achieving this sample bound, which required exponential time. This marks the first unification of computational efficiency and near-optimal sample complexity in agnostic boosting, substantially improving its practical applicability while preserving theoretical guarantees.

๐Ÿ“ Abstract
Boosting is a powerful method that turns weak learners, which perform only slightly better than random guessing, into strong learners with high accuracy. While boosting is well understood in the classic setting, it is less so in the agnostic case, where no assumptions are made about the data. Indeed, only recently was the sample complexity of agnostic boosting nearly settled (arXiv:2503.09384), but the known algorithm achieving this bound has exponential running time. In this work, we propose the first agnostic boosting algorithm with near-optimal sample complexity, running in time polynomial in the sample size when the other parameters of the problem are considered fixed.
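To make the abstract's premise concrete (combining weak learners that barely beat random guessing into an accurate strong learner), here is a minimal sketch of a classic AdaBoost-style loop with decision stumps on 1-D data. This is a generic illustration of boosting for intuition only, not the paper's agnostic algorithm, and all names here are hypothetical.

```python
# Illustrative AdaBoost-style boosting sketch (not the paper's agnostic
# algorithm): reweight examples so each new weak learner focuses on the
# points the ensemble currently gets wrong.
import math

def stump_learner(X, y, w):
    """Weak learner: best weighted threshold stump on 1-D inputs."""
    best = None  # (weighted error, threshold, sign)
    for thr in sorted(set(X)):
        for sign in (1, -1):
            pred = [sign if x >= thr else -sign for x in X]
            err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

def adaboost(X, y, rounds=10):
    """Boost weighted stumps; labels y are in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n          # start with uniform example weights
    ensemble = []              # list of (alpha, threshold, sign)
    for _ in range(rounds):
        err, thr, sign = stump_learner(X, y, w)
        err = max(err, 1e-10)  # avoid log(0) on a perfect stump
        if err >= 0.5:         # weak-learning assumption violated
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, sign))
        # Reweight: misclassified points gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * yi * (sign if xi >= thr else -sign))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Strong learner: sign of the alpha-weighted vote of all stumps."""
    s = sum(a * (sg if x >= t else -sg) for a, t, sg in ensemble)
    return 1 if s >= 0 else -1
```

The key difference in the agnostic setting studied by the paper is that no weak-learning guarantee relative to a perfect target is assumed, so the reweighting and aggregation above would not yield the stated guarantees there; the sketch only shows the general boosting template.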
Problem

Research questions and friction points this paper is trying to address.

agnostic boosting
sample complexity
running time
weak learners
strong learners
Innovation

Methods, ideas, or system contributions that make the work stand out.

agnostic boosting
sample complexity
polynomial time
weak learners
boosting algorithm
Arthur da Cunha
Aarhus University
Mikael Møller Høgsgaard
Aarhus University
Andrea Paudice
Assistant Professor (Tenure Track) at Aarhus University
Learning Theory · Machine Learning · Optimization