🤖 AI Summary
Existing sparse methods for subset selection in high-dimensional linear mixed models (LMMs) suffer from poor scalability and fail to jointly handle fixed and random effects. Method: We propose the first ℓ₀-regularized LMM framework, coupled with a scalable coordinate descent algorithm augmented by local search and a penalized quasi-likelihood approximation to address the resulting nonconvex optimization. Contribution/Results: The method achieves efficient variable selection, in seconds to minutes, for problems with thousands of covariates. We establish a finite-sample upper bound on the Kullback-Leibler divergence of the estimator. Experiments on synthetic data and real-world biological and news datasets demonstrate substantial improvements over sparse linear models that ignore random effects, achieving high-precision variable selection and strong predictive accuracy. The approach is applicable to personalized medicine and adaptive marketing.
📝 Abstract
Linear mixed models (LMMs), which incorporate fixed and random effects, are key tools for analyzing heterogeneous data, such as in personalized medicine or adaptive marketing. Nowadays, this type of data is increasingly wide, sometimes containing thousands of candidate predictors, necessitating sparsity for prediction and interpretation. However, existing sparse learning methods for LMMs do not scale well beyond tens or hundreds of predictors, leaving a large gap compared with sparse methods for linear models, which ignore random effects. This paper closes the gap with a new $\ell_0$ regularized method for LMM subset selection that can run on datasets containing thousands of predictors in seconds to minutes. On the computational front, we develop a coordinate descent algorithm as our main workhorse and provide a guarantee of its convergence. We also develop a local search algorithm to help traverse the nonconvex optimization surface. Both algorithms readily extend to subset selection in generalized LMMs via a penalized quasi-likelihood approximation. On the statistical front, we provide a finite-sample bound on the Kullback-Leibler divergence of the new method. We then demonstrate its excellent performance in synthetic experiments and illustrate its utility on two datasets from biology and journalism.
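To give a flavor of the computational core, the sketch below implements cyclic coordinate descent for $\ell_0$-penalized least squares in the fixed-effects-only special case. This is an illustrative simplification, not the paper's algorithm: it omits the random-effects covariance structure, the local-search moves, and the quasi-likelihood extension, and the function name and parameters are hypothetical. The coordinate update is a hard-thresholding rule: the unpenalized univariate solution is kept only if it lowers the penalized objective.

```python
import numpy as np

def l0_coordinate_descent(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for min_b 0.5*||y - X b||^2 + lam*||b||_0.

    Illustrative fixed-effects-only sketch; the paper's method additionally
    models random effects, which are omitted here.
    """
    n, p = X.shape
    col_norms = (X ** 2).sum(axis=0)        # precompute ||x_j||^2
    b = np.zeros(p)
    r = y - X @ b                           # running residual
    for _ in range(n_iter):
        for j in range(p):
            # unpenalized univariate least-squares solution for coordinate j,
            # holding all other coordinates fixed
            rho = X[:, j] @ r + col_norms[j] * b[j]
            b_ls = rho / col_norms[j]
            # hard thresholding: keep b_ls iff the drop in squared error
            # (0.5*||x_j||^2 * b_ls^2) exceeds the l0 penalty lam
            b_new = b_ls if 0.5 * col_norms[j] * b_ls ** 2 > lam else 0.0
            r += X[:, j] * (b[j] - b_new)   # update residual incrementally
            b[j] = b_new
    return b
```

Each coordinate update costs O(n) thanks to the maintained residual, which is what makes sweeps over thousands of predictors fast; a local-search step (as in the paper) would additionally try swapping selected and unselected coordinates to escape poor local minima of the nonconvex objective.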