AI Summary
Latent Variable Multi-Output Gaussian Processes (LV-MOGPs) suffer from linear computational complexity growth in the output dimension, severely limiting scalability to high-dimensional output settings.
Method: We propose the first stochastic variational inference framework for LV-MOGPs supporting *dual batching*: simultaneous mini-batching over both inputs and outputs. Our approach enables scalable variational inference over output dimensions via kernelized output correlation modeling and a novel two-path mini-batch sampling strategy, making per-iteration complexity independent of the output dimension.
Contribution/Results: Theoretically grounded and empirically validated, our method accelerates training by over 10× on 100-dimensional output tasks while preserving state-of-the-art predictive accuracy. Moreover, it supports zero-shot generalization to unseen outputs, substantially broadening the applicability of LV-MOGPs to large-scale multi-output regression.
Abstract
The Multi-Output Gaussian Process (MOGP) is a popular tool for modelling data from multiple sources. A typical choice of covariance function for a MOGP is the Linear Model of Coregionalization (LMC), which parametrically models the covariance between outputs. The Latent Variable MOGP (LV-MOGP) generalises this idea by modelling the covariance between outputs using a kernel applied to latent variables, one per output, yielding a flexible MOGP model that generalises efficiently to new outputs with few data points. The computational complexity of the LV-MOGP grows linearly with the number of outputs, which makes it unsuitable for problems with many outputs. In this paper, we propose a stochastic variational inference approach for the LV-MOGP that allows mini-batches over both inputs and outputs, making the computational complexity per training iteration independent of the number of outputs.
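The dual-batching idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the dataset, batch sizes, and the placeholder per-point log-likelihood are all assumptions. It shows only the core mechanism: drawing independent mini-batches over inputs and outputs and rescaling so the stochastic term is an unbiased estimate of the full sum over all N × D (input, output) pairs, at a per-iteration cost set by the batch sizes rather than by the number of outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-output data: N inputs, D outputs (sizes are illustrative).
N, D = 500, 100
X = rng.normal(size=(N, 1))
Y = rng.normal(size=(N, D))

def dual_minibatch_estimate(Y, batch_x=32, batch_d=8):
    """Two-path mini-batch sampling: draw one batch over inputs and an
    independent batch over outputs, then rescale so the result is an
    unbiased estimate of the sum of per-point terms over all N*D pairs."""
    N, D = Y.shape
    ix = rng.choice(N, size=batch_x, replace=False)   # input mini-batch
    idx_d = rng.choice(D, size=batch_d, replace=False)  # output mini-batch
    # Placeholder per-point log-likelihood terms; a real LV-MOGP would
    # compute these from the variational posterior instead.
    ll = -0.5 * Y[np.ix_(ix, idx_d)] ** 2
    scale = (N / batch_x) * (D / batch_d)
    # Cost of this iteration depends only on batch_x * batch_d, not on D.
    return scale * ll.sum()

# Averaging many stochastic estimates recovers the full sum over N*D terms.
est = np.mean([dual_minibatch_estimate(Y) for _ in range(2000)])
full = (-0.5 * Y ** 2).sum()
```

In a full SVI scheme, each such stochastic estimate would feed one gradient step on the variational parameters; the rescaling factor is what keeps the gradient unbiased despite seeing only a small block of the output matrix per iteration.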