Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling

📅 2024-08-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
In high-dimensional, complex data settings, Bayesian inference for Gaussian Process Latent Variable Models (GPLVMs) faces two key challenges in importance-weighted variational learning: the difficulty of designing expressive proposal distributions and the resulting loose posterior approximations. To address these, this paper introduces Annealed Importance Sampling (AIS) into the GPLVM variational learning framework for the first time, proposing a stochastic-gradient-driven AIS method. The approach integrates Sequential Monte Carlo with variational inference by constructing a differentiable sequence of intermediate distributions and optimizing a reparameterized importance-weighted ELBO, thereby overcoming the bottleneck of modeling proposal distributions in high-dimensional spaces. Experiments on synthetic and image datasets demonstrate that the proposed method achieves significantly higher ELBO and log-likelihood values, converges more stably, and outperforms state-of-the-art baselines.
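The core idea of annealing the posterior through a sequence of intermediate distributions can be illustrated with textbook AIS on a toy one-dimensional problem. This is a minimal sketch of standard AIS (not the paper's GPLVM-specific algorithm): the geometric path between a tractable proposal q and the target p, with incremental weight updates and a Metropolis-Hastings rejuvenation step at each temperature. The densities and schedule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D problem (illustrative assumption, not the paper's model):
# proposal q = N(0, 1), target p = N(3, 0.5^2), both normalized.
def log_q(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_p(x):
    return -0.5 * ((x - 3.0) / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi))

# Geometric annealing path: pi_t(x) ∝ q(x)^(1 - beta_t) * p(x)^beta_t
betas = np.linspace(0.0, 1.0, 31)
n = 4000
x = rng.standard_normal(n)      # initial samples from q = pi_0
log_w = np.zeros(n)             # accumulated log importance weights

for b0, b1 in zip(betas[:-1], betas[1:]):
    # AIS weight update: log-ratio of consecutive intermediate densities
    log_w += (b1 - b0) * (log_p(x) - log_q(x))
    # one Metropolis-Hastings move targeting pi_{b1} to rejuvenate samples
    log_pi = lambda z: (1 - b1) * log_q(z) + b1 * log_p(z)
    prop = x + 0.5 * rng.standard_normal(n)
    accept = np.log(rng.random(n)) < log_pi(prop) - log_pi(x)
    x = np.where(accept, prop, x)

# AIS estimate of the log normalizing constant via log-mean-exp;
# both densities are normalized here, so the true value is 0.
m = log_w.max()
log_Z = m + np.log(np.mean(np.exp(log_w - m)))
```

Because the proposal and target barely overlap, direct importance sampling from q would need an enormous number of samples; the annealed sequence moves the particles gradually, which is the property the paper exploits for GPLVM posteriors.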

📝 Abstract
Gaussian Process Latent Variable Models (GPLVMs) have become increasingly popular for unsupervised tasks such as dimensionality reduction and missing data recovery due to their flexibility and non-linear nature. An importance-weighted version of Bayesian GPLVMs has been proposed to obtain a tighter variational bound. However, this approach is primarily limited to simple data structures, as constructing an effective proposal distribution becomes quite challenging in high-dimensional spaces or with complex datasets. In this work, we propose an Annealed Importance Sampling (AIS) approach to address these issues. By transforming the posterior into a sequence of intermediate distributions using annealing, we combine the strengths of Sequential Monte Carlo samplers and VI to explore a wider range of posterior distributions and gradually approach the target distribution. We further propose an efficient algorithm by reparameterizing all variables in the evidence lower bound (ELBO). Experimental results on both toy and image datasets demonstrate that our method outperforms state-of-the-art methods in terms of tighter variational bounds, higher log-likelihoods, and more robust convergence.
Problem

Research questions and friction points this paper is trying to address.

Addresses limitations of Bayesian GPLVMs in high-dimensional or complex data.
Proposes an Annealed Importance Sampling approach to improve posterior exploration.
Enhances variational bounds and convergence for unsupervised tasks like dimensionality reduction.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Annealed Importance Sampling for GPLVMs
Reparameterized ELBO for efficient optimization
Combines Sequential Monte Carlo with variational inference
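The reparameterized importance-weighted ELBO underlying the second and third points can be sketched on a toy conjugate model. The model, variational family, and sample counts below are illustrative assumptions, not the paper's GPLVM setup; the sketch shows the IWAE-style K-sample bound computed with the reparameterization trick, which the paper extends with an annealed sequence of distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy model: latent z ~ N(0, 1), likelihood y | z ~ N(z, 0.5^2),
# Gaussian variational posterior q(z) = N(mu, sigma^2).
def iw_elbo(y, mu, log_sigma, K):
    """K-sample importance-weighted ELBO with reparameterized samples."""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(K)
    z = mu + sigma * eps                       # reparameterization trick
    log_prior = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
    log_lik = -0.5 * ((y - z) / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi))
    log_q = -0.5 * eps**2 - 0.5 * np.log(2 * np.pi) - log_sigma
    log_w = log_prior + log_lik - log_q        # log importance weights
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))  # stable log-mean-exp

y = 1.0
# Averaging over repetitions to compare the K=1 bound (standard ELBO)
# with the K=64 bound, which is provably at least as tight in expectation.
elbo_k1 = np.mean([iw_elbo(y, 0.0, 0.0, K=1) for _ in range(200)])
elbo_k64 = np.mean([iw_elbo(y, 0.0, 0.0, K=64) for _ in range(200)])
```

Because every term is a deterministic function of `mu`, `log_sigma`, and the noise `eps`, the bound is differentiable in the variational parameters, which is what enables the stochastic-gradient optimization described above.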