Scalable Subset Selection in Linear Mixed Models

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sparse methods for subset selection in high-dimensional linear mixed models (LMMs) suffer from poor scalability and fail to jointly handle fixed and random effects. Method: We propose a new ℓ₀-regularized LMM framework, coupled with a scalable coordinate descent algorithm augmented by local search and a penalized quasi-likelihood approximation to address the resulting nonconvex optimization. Contribution/Results: The method performs subset selection within seconds to minutes on problems with thousands of covariates. We establish a finite-sample upper bound on the Kullback-Leibler divergence. Experiments on synthetic data and real-world biological and news datasets demonstrate substantial improvements over sparse linear models that ignore random effects, achieving both high-precision variable selection and predictive accuracy. The approach is applicable to personalized medicine and adaptive marketing.

📝 Abstract
Linear mixed models (LMMs), which incorporate fixed and random effects, are key tools for analyzing heterogeneous data, such as in personalized medicine or adaptive marketing. Nowadays, this type of data is increasingly wide, sometimes containing thousands of candidate predictors, necessitating sparsity for prediction and interpretation. However, existing sparse learning methods for LMMs do not scale well beyond tens or hundreds of predictors, leaving a large gap compared with sparse methods for linear models, which ignore random effects. This paper closes the gap with a new $\ell_0$ regularized method for LMM subset selection that can run on datasets containing thousands of predictors in seconds to minutes. On the computational front, we develop a coordinate descent algorithm as our main workhorse and provide a guarantee of its convergence. We also develop a local search algorithm to help traverse the nonconvex optimization surface. Both algorithms readily extend to subset selection in generalized LMMs via a penalized quasi-likelihood approximation. On the statistical front, we provide a finite-sample bound on the Kullback-Leibler divergence of the new method. We then demonstrate its excellent performance in synthetic experiments and illustrate its utility on two datasets from biology and journalism.
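To illustrate the coordinate descent idea at the heart of the method, here is a minimal sketch of cyclic coordinate descent for an $\ell_0$-penalized least-squares objective. This is a deliberate simplification: it omits the random effects and variance components that the paper's actual algorithm handles, and the function name and thresholding rule shown are illustrative, not the authors' implementation.

```python
import numpy as np

def l0_coordinate_descent(X, y, lam, max_iter=100, tol=1e-8):
    """Cyclic coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_0.

    Simplified sketch: random effects are omitted, so this reduces to
    l0-penalized linear regression with per-coordinate hard thresholding.
    """
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                      # running residual
    col_sq = (X ** 2).sum(axis=0)         # column squared norms
    for _ in range(max_iter):
        max_change = 0.0
        for j in range(p):
            if col_sq[j] == 0:
                continue
            r += X[:, j] * beta[j]        # partial residual excluding j
            b_j = X[:, j] @ r / col_sq[j] # one-dimensional least squares
            # keep coordinate j only if its fit improvement exceeds lam
            new = b_j if col_sq[j] * b_j ** 2 / (2 * n) > lam else 0.0
            max_change = max(max_change, abs(new - beta[j]))
            beta[j] = new
            r -= X[:, j] * beta[j]
        if max_change < tol:
            break
    return beta
```

Each coordinate update is closed-form (a univariate least-squares fit followed by a hard threshold), which is what makes coordinate descent attractive for problems with thousands of predictors.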
Problem

Research questions and friction points this paper is trying to address.

Scalable subset selection for high-dimensional linear mixed models
Efficient algorithms for handling thousands of predictors
Improved prediction and interpretation in heterogeneous data analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

New $\ell_0$ regularized method for LMMs
Coordinate descent algorithm for scalability
Local search for nonconvex optimization surface
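The local search idea in the list above can be sketched as a swap heuristic: try exchanging a selected variable for an unselected one and keep the swap if the objective improves. The version below is a hypothetical simplification on the plain $\ell_0$-penalized least-squares objective (the paper's local search operates on the mixed-model likelihood), and the function name is illustrative.

```python
import numpy as np

def local_search_swap(X, y, beta, lam):
    """One pass of swap-based local search for
    (1/2n)||y - X b||^2 + lam * ||b||_0.

    Hypothetical sketch: for each active coordinate j and inactive
    coordinate k, drop j, refit k in one dimension, and accept the
    swap if the penalized loss decreases.
    """
    n, p = X.shape
    col_sq = (X ** 2).sum(axis=0)

    def loss(b):
        return ((y - X @ b) ** 2).sum() / (2 * n) + lam * np.count_nonzero(b)

    best = loss(beta)
    for j in np.flatnonzero(beta):            # candidate to drop
        for k in np.flatnonzero(beta == 0):   # candidate to add
            cand = beta.copy()
            cand[j] = 0.0
            r = y - X @ cand
            cand[k] = X[:, k] @ r / col_sq[k] # one-dimensional refit on k
            val = loss(cand)
            if val < best - 1e-12:            # accept improving swap
                beta, best = cand, val
    return beta
```

Swaps of this kind can escape local minima that pure coordinate descent gets stuck in, for example when a correlated proxy variable was selected instead of the true predictor.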
Ryan Thompson
University of Technology Sydney
Machine Learning
Matt P. Wand
School of Mathematical and Physical Sciences, University of Technology Sydney, Ultimo NSW 2007, Australia
Joanna J. J. Wang
School of Mathematical and Physical Sciences, University of Technology Sydney, Ultimo NSW 2007, Australia