🤖 AI Summary
This study addresses the challenge of reliably identifying key predictive variables in regression analysis to uncover underlying scientific mechanisms. We systematically evaluate variable selection strategies—including exhaustive, greedy, LASSO path, and stochastic search—combined with the AIC and BIC criteria, as well as LASSO with cross-validation, across linear and generalized linear models under model spaces ranging from small to high-dimensional. Our empirical comparison on simulated data shows that BIC paired with exhaustive search (for small model spaces) or stochastic search (for large model spaces) consistently achieves the highest correct identification rate and lowest false discovery rate among all combinations considered. These findings offer practically feasible guidance for variable selection that may help support reproducible scientific inference.
📝 Abstract
For many scientific questions, understanding the underlying mechanism is the goal. To help investigators better understand such mechanisms, variable selection is a crucial step that permits the identification of the regression variables most associated with the outcome of interest. A variable selection method consists of model evaluation using an information criterion and a search of the model space. Here, we provide a comprehensive comparison of variable selection methods using the performance measures of correct identification rate (CIR), recall, and false discovery rate (FDR). We consider the BIC and AIC for evaluating models, and exhaustive, greedy, LASSO path, and stochastic search approaches for searching the model space; we also consider LASSO using cross-validation. We perform simulation studies for linear and generalized linear models that parametrically explore a wide range of realistic sample sizes, effect sizes, and correlations among regression variables. We consider model spaces with both small and large numbers of potential regressors. The results show that exhaustive-search BIC and stochastic-search BIC outperform the other methods on these performance measures for small and large model spaces, respectively. These approaches yield the highest CIR and lowest FDR, which collectively may support long-term efforts towards increasing replicability in research.
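To make the method concrete, a minimal sketch of one of the compared approaches, exhaustive-search variable selection scored by BIC for a linear model, might look like the following. The BIC formula used (n·log(RSS/n) + k·log(n)) and the simulated data are illustrative assumptions, not the paper's actual simulation design.

```python
# Hedged sketch: exhaustive-search BIC variable selection for a linear model.
# Assumes a small model space (here p = 4 candidate predictors), so all 2^p
# subsets can be scored; the paper's stochastic search targets larger spaces.
import itertools
import numpy as np

def bic_linear(X, y):
    """BIC for an OLS fit with an intercept: n*log(RSS/n) + k*log(n)."""
    n = len(y)
    # Design matrix: intercept column plus the selected predictors (if any).
    Xd = np.ones((n, 1)) if X is None else np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    k = Xd.shape[1]  # number of estimated coefficients
    return n * np.log(rss / n) + k * np.log(n)

def exhaustive_bic(X, y):
    """Score every subset of columns of X; return (best BIC, best subset)."""
    p = X.shape[1]
    best = (np.inf, ())
    for r in range(p + 1):
        for subset in itertools.combinations(range(p), r):
            score = bic_linear(X[:, subset] if subset else None, y)
            best = min(best, (score, subset))
    return best

# Illustrative simulated data: only x0 and x2 truly affect y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=200)
score, subset = exhaustive_bic(X, y)
print(subset)  # with strong effects, BIC recovers the true subset (0, 2)
```

The subset chosen here determines the CIR, recall, and FDR in a single simulation replicate: the method scores a correct identification only when the selected subset exactly matches the true one.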