What is in the model? A comparison of variable selection criteria and model search approaches

📅 2025-10-02
🤖 AI Summary
This study addresses the challenge of reliably identifying the key predictive variables in regression analysis in order to uncover underlying scientific mechanisms. The authors systematically evaluate variable selection strategies—exhaustive search, greedy search, LASSO path, LASSO with cross-validation, and stochastic search—combined with the AIC and BIC criteria, across linear and generalized linear models and across model spaces of varying size. Their simulation study finds that BIC paired with exhaustive search (for small model spaces) or stochastic search (for large model spaces) achieves the highest correct identification rate (CIR) and lowest false discovery rate (FDR) among the methods compared. These findings offer practical, reproducible guidance for variable selection in scientific inference.

📝 Abstract
For many scientific questions, understanding the underlying mechanism is the goal. To help investigators better understand the underlying mechanism, variable selection is a crucial step that permits the identification of the most associated regression variables of interest. A variable selection method consists of model evaluation using an information criterion and a search of the model space. Here, we provide a comprehensive comparison of variable selection methods using performance measures of correct identification rate (CIR), recall, and false discovery rate (FDR). We consider the BIC and AIC for evaluating models, and exhaustive, greedy, LASSO path, and stochastic search approaches for searching the model space; we also consider LASSO using cross validation. We perform simulation studies for linear and generalized linear models that parametrically explore a wide range of realistic sample sizes, effect sizes, and correlations among regression variables. We consider model spaces with a small and larger number of potential regressors. The results show that the exhaustive search BIC and stochastic search BIC outperform the other methods when considering the performance measures on small and large model spaces, respectively. These approaches result in the highest CIR and lowest FDR, which collectively may support long-term efforts towards increasing replicability in research.
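To make the exhaustive-search-with-BIC strategy concrete, here is a minimal sketch of scoring every subset of predictors of a linear model by BIC and keeping the minimizer. This is illustrative only, not the authors' code; the function names, the BIC formula used (Gaussian log-likelihood up to a constant), and the simulated data-generating step are assumptions for the example.

```python
# Illustrative sketch: exhaustive-search variable selection with BIC
# for a linear model. Not the paper's implementation.
import itertools
import numpy as np

def bic_linear(X, y, subset):
    """BIC of an OLS fit using the predictor columns in `subset` plus an intercept."""
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    k = Xs.shape[1]  # number of estimated coefficients
    return n * np.log(rss / n) + k * np.log(n)

def exhaustive_search_bic(X, y):
    """Score all 2^p subsets of predictors and return the BIC-minimizing one."""
    p = X.shape[1]
    best = min(
        (s for r in range(p + 1) for s in itertools.combinations(range(p), r)),
        key=lambda s: bic_linear(X, y, s),
    )
    return set(best)

# Simulated example: 6 candidate predictors, true model uses columns 0 and 3.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=200)
print(exhaustive_search_bic(X, y))
```

Exhaustive search is only feasible for small model spaces (here 2^6 = 64 subsets), which is why the paper pairs BIC with stochastic search when the number of candidate regressors is large.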
Problem

Research questions and friction points this paper is trying to address.

Comparing variable selection criteria and model search approaches
Evaluating performance using correct identification rate (CIR), recall, and false discovery rate (FDR)
Identifying optimal methods for increasing research replicability
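The performance measures listed above can be sketched as simple set operations on the true and selected variable sets. The definitions below are plausible readings of the abstract, not the paper's exact formulas; in particular, treating CIR as an exact-model-match indicator (averaged over simulation replications) is an assumption for this example.

```python
# Hedged sketch of the performance measures, assuming each is computed
# from the set of truly associated variables and the set selected by a method.
def recall(true_vars, selected):
    """Fraction of truly associated variables that were selected."""
    return len(true_vars & selected) / len(true_vars) if true_vars else 1.0

def fdr(true_vars, selected):
    """Fraction of selected variables that are not truly associated."""
    return len(selected - true_vars) / len(selected) if selected else 0.0

def cir(true_vars, selected):
    """Exact-match indicator; averaging it over replications gives a rate
    (assumed interpretation of CIR for this sketch)."""
    return 1.0 if selected == true_vars else 0.0

true_vars = {0, 3}
selected = {0, 3, 5}  # one spurious variable selected
print(recall(true_vars, selected), fdr(true_vars, selected), cir(true_vars, selected))
# → 1.0 0.3333333333333333 0.0
```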
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares variable selection criteria and search methods
Evaluates BIC and AIC with multiple search approaches
Identifies optimal methods using the performance metrics CIR, recall, and FDR
Shuangshuang Xu
Fralin Biomedical Research Institute, Roanoke, Virginia, 24016, U.S.A.
Marco A. R. Ferreira
Professor of Statistics, Virginia Tech
Bayesian methods, time series analysis, spatial data, spatio-temporal modeling
Allison N. Tegge
Fralin Biomedical Research Institute, Roanoke, Virginia, 24016, U.S.A.