CoMET: A Compressed Bayesian Mixed-Effects Model for High-Dimensional Tensors

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the computational intractability of Bayesian mixed-effects modeling with high-dimensional tensor-valued covariates, particularly in repeated-measures settings. The authors propose CoMET, the first Bayesian mixed-effects framework that accommodates both tensor-valued fixed and random effects. CoMET achieves scalability through mode-wise random projections to compress the random-effects covariance structure, integrates low-rank tensor decomposition with a marginally structured Horseshoe prior for efficient fixed-effects selection, and employs a collapsed Gibbs sampler to ensure computational efficiency. The method offers nearly linear computational complexity and provable posterior contraction rates in high dimensions. Empirical evaluations on simulated data and real-world applications—including facial expression prediction and music emotion modeling—demonstrate substantial improvements over existing penalized approaches.
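The core compression step described above can be illustrated with a short sketch: each mode of a tensor covariate is multiplied by a random Gaussian matrix, shrinking a large tensor to a small one whose dimensions the sampler can handle. This is a minimal illustration of the mode-wise random-projection idea, not the paper's implementation; the function names and projection scaling are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mode_k_product(T, M, k):
    """Multiply tensor T along mode k by matrix M of shape (m_k, d_k)."""
    T = np.moveaxis(T, k, 0)               # bring mode k to the front
    out = np.tensordot(M, T, axes=(1, 0))  # contract the d_k dimension
    return np.moveaxis(out, 0, k)          # restore the original mode order

def compress(X, dims_out):
    """Mode-wise Gaussian random projection of a tensor covariate X."""
    for k, m in enumerate(dims_out):
        d = X.shape[k]
        # i.i.d. Gaussian projection, scaled so projected norms are stable
        P = rng.normal(scale=1.0 / np.sqrt(m), size=(m, d))
        X = mode_k_product(X, P, k)
    return X

X = rng.normal(size=(64, 48, 32))   # high-dimensional tensor covariate
Z = compress(X, (8, 6, 4))          # compressed representation
print(Z.shape)                      # (8, 6, 4)
```

Because each mode is projected separately, the projection cost scales with the mode sizes rather than with the full tensor dimension, which is the source of the near-linear complexity claimed for the method.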

📝 Abstract
Mixed-effects models are fundamental tools for analyzing clustered and repeated-measures data, but existing high-dimensional methods largely focus on penalized estimation with vector-valued covariates. Bayesian alternatives in this regime are limited, with no sampling-based mixed-effects framework that supports tensor-valued fixed- and random-effects covariates while remaining computationally tractable. We propose the Compressed Mixed-Effects Tensor (CoMET) model for high-dimensional repeated-measures data with scalar responses and tensor-valued covariates. CoMET performs structured, mode-wise random projection of the random-effects covariance, yielding a low-dimensional covariance parameter that admits simple Gaussian prior specification and enables efficient imputation of compressed random effects. For the mean structure, CoMET leverages a low-rank tensor decomposition and margin-structured Horseshoe priors to enable fixed-effects selection. These design choices lead to an efficient collapsed Gibbs sampler whose computational complexity grows approximately linearly with the tensor covariate dimensions. We establish high-dimensional theoretical guarantees by identifying regularity conditions under which CoMET's posterior predictive risk decays to zero. Empirically, CoMET outperforms penalized competitors across a range of simulation studies and two benchmark applications involving facial-expression prediction and music emotion modeling.
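The abstract's other ingredient, a low-rank tensor decomposition of the fixed-effects coefficient, can be sketched as follows. Assuming a rank-R CP-style structure (the abstract does not name the exact decomposition), the full coefficient tensor is a sum of R outer products of mode-wise factor vectors, so the number of free parameters drops from the product of the mode sizes to their sum times R. The names below are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def cp_coefficient(factors):
    """Rebuild a rank-R coefficient tensor B = sum_r b1_r ∘ b2_r ∘ ... from
    a list of (d_k x R) factor matrices."""
    B = 0.0
    for cols in zip(*(f.T for f in factors)):  # r-th column of each factor
        outer = cols[0]
        for c in cols[1:]:
            outer = np.multiply.outer(outer, c)  # outer product across modes
        B = B + outer
    return B

d, R = (10, 12, 14), 3
factors = [rng.normal(size=(dk, R)) for dk in d]  # mode-wise factor matrices
B = cp_coefficient(factors)

n_full = np.prod(d)              # parameters in an unstructured coefficient
n_cp = sum(dk * R for dk in d)   # parameters in the rank-R factors
print(B.shape, n_full, n_cp)     # (10, 12, 14) 1680 108
```

Sparsity-inducing (e.g. Horseshoe) priors on the factor entries then give fixed-effects selection over this much smaller parameter set, which is what makes the Gibbs updates tractable.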
Problem

Research questions and friction points this paper is trying to address.

mixed-effects model
high-dimensional tensors
Bayesian inference
tensor-valued covariates
repeated-measures data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian mixed-effects model
tensor-valued covariates
random projection
low-rank tensor decomposition
Horseshoe prior