AI Summary
Greedy forward selection in sparse linear regression suffers from high computational cost and opaque mechanisms, while ensemble methods like random forests lack theoretical interpretability and efficient implementations. Method: This paper proposes a randomized ensemble of greedy estimators based on feature subsampling. Contribution/Results: We provide the first rigorous proof that this randomized ensemble simultaneously reduces both training error and model degrees of freedom, thereby reshaping the bias-variance trade-off. Under orthogonal design, we derive an OLS coefficient rescaling mechanism with logistic weights, revealing that its implicit regularization is non-shrinking in nature. Made computationally efficient via feature subsampling and dynamic programming, the method significantly outperforms the lasso and elastic net across multiple benchmark datasets, achieving both superior predictive accuracy and enhanced model interpretability.
Abstract
Combining randomized estimators in an ensemble, such as via random forests, has become a fundamental technique in modern data science, but can be computationally expensive. Furthermore, the mechanism by which this improves predictive performance is poorly understood. We address these issues in the context of sparse linear regression by proposing and analyzing an ensemble of greedy forward selection estimators that are randomized by feature subsampling -- at each iteration, the best feature is selected from within a random subset. We design a novel implementation based on dynamic programming that greatly improves its computational efficiency. Furthermore, we show via careful numerical experiments that our method can outperform popular methods such as lasso and elastic net across a wide range of settings. Next, contrary to prevailing belief that randomized ensembling is analogous to shrinkage, we show via numerical experiments that it can simultaneously reduce training error and degrees of freedom, thereby shifting the entire bias-variance trade-off curve of the base estimator. We prove this fact rigorously in the setting of orthogonal features, in which case, the ensemble estimator rescales the ordinary least squares coefficients with a two-parameter family of logistic weights, thereby enlarging the model search space. These results enhance our understanding of random forests and suggest that implicit regularization in general may have more complicated effects than explicit regularization.
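The base procedure described in the abstract -- greedy forward selection in which, at each iteration, the best feature is chosen from within a random subset -- can be sketched as follows. This is a minimal illustration assuming a plain least-squares scoring rule and simple averaging of coefficient vectors across runs; the function names, the subset size `m`, and the ensemble size `B` are illustrative choices, not the paper's actual implementation (which additionally uses a dynamic-programming speedup not reproduced here).

```python
import numpy as np

def randomized_forward_selection(X, y, k, m, rng):
    """One randomized greedy run: at each of k steps, draw a random
    subset of m not-yet-selected features and add the one that most
    reduces the residual sum of squares, refitting OLS after each add."""
    n, p = X.shape
    selected = []
    residual = y.copy()
    for _ in range(k):
        remaining = [j for j in range(p) if j not in selected]
        if not remaining:
            break
        subset = rng.choice(remaining, size=min(m, len(remaining)),
                            replace=False)
        # Score each candidate by its squared correlation with the
        # current residual (equivalent to the SSE drop for a single add).
        best_j, best_score = None, -np.inf
        for j in subset:
            xj = X[:, j]
            denom = xj @ xj
            score = (xj @ residual) ** 2 / denom if denom > 0 else -np.inf
            if score > best_score:
                best_j, best_score = j, score
        selected.append(int(best_j))
        # Refit OLS on the selected features and update the residual.
        Xs = X[:, selected]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        residual = y - Xs @ beta
    coef = np.zeros(p)
    if selected:
        coef[selected] = beta
    return coef

def ensemble_forward_selection(X, y, k, m, B=100, seed=0):
    """Average the coefficient vectors of B randomized greedy runs."""
    rng = np.random.default_rng(seed)
    avg = np.zeros(X.shape[1])
    for _ in range(B):
        avg += randomized_forward_selection(X, y, k, m, rng)
    return avg / B
```

Averaging coefficient vectors across runs is what allows the ensemble to behave differently from any single greedy path: features that only sometimes survive the random subsampling receive down-weighted coefficients, which is loosely the rescaling effect the abstract analyzes exactly under orthogonal design.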