🤖 AI Summary
We address the nonconvex, nonsmooth best-subset selection problem under ℓ₀ regularization. We propose a dynamic incremental primal-dual algorithm grounded in an analysis of the dual problem structure. The method combines dual-range estimation with incremental updates, preserving solution sparsity while substantially reducing redundant computation. We establish theoretical guarantees of global convergence and statistical consistency. Empirical evaluations on synthetic and real-world datasets show that the algorithm achieves a 3–5× speedup over state-of-the-art methods such as L0Learn and BeSS, while also improving prediction accuracy and model-selection consistency. The gains are especially pronounced in high-dimensional sparse settings.
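For concreteness, the ℓ₀-regularized problems discussed here typically take the following form; the squared loss shown is an illustrative assumption on our part, since the paper considers a family of ℓ₀-regularized objectives:

$$
\min_{\beta \in \mathbb{R}^p} \ \frac{1}{2}\lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_0,
\qquad
\lVert \beta \rVert_0 = \#\{\, j : \beta_j \neq 0 \,\},
$$

where $\lambda > 0$ trades off fit against sparsity; the cardinality-constrained variant, $\min_{\beta} \lVert y - X\beta \rVert_2^2$ subject to $\lVert \beta \rVert_0 \le k$, is the classical best-subset formulation.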
📝 Abstract
Best subset selection is considered the "gold standard" for many sparse learning problems. A variety of optimization techniques have been proposed to attack this nonsmooth, nonconvex problem. In this paper, we investigate the dual forms of a family of $\ell_0$-regularized problems. An efficient primal-dual algorithm is developed based on the structures of the primal and dual problems. By leveraging dual range estimation together with an incremental strategy, our algorithm reduces redundant computation and improves the solutions of best subset selection. Theoretical analysis and experiments on synthetic and real-world datasets validate the efficiency and statistical properties of the proposed approach.
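To ground the problem class, below is a minimal sketch of iterative hard thresholding (IHT), one of the standard optimization techniques for this nonconvex problem. It is included purely as a reference point and is not the primal-dual algorithm proposed in the paper; the data shapes, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def iht(X, y, k, iters=200):
    """Iterative hard thresholding for min 0.5*||y - X b||^2  s.t.  ||b||_0 <= k.
    A standard baseline for best subset selection; NOT the paper's method."""
    n, p = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y)             # gradient of the squared loss
        b -= step * grad                     # gradient descent step
        small = np.argpartition(np.abs(b), p - k)[: p - k]
        b[small] = 0.0                       # keep only the k largest-magnitude entries
    return b

# Illustrative usage on synthetic sparse data
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 500))
beta_true = np.zeros(500)
beta_true[:5] = 3.0
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = iht(X, y, k=5)
print(np.nonzero(beta_hat)[0])               # recovered support
```

The hard-thresholding step is what makes the iteration sparsity-preserving; the primal-dual approach described above instead exploits dual structure to avoid recomputing work across updates.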