Disentanglement of Sources in a Multi-Stream Variational Autoencoder

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Source separation and disentangled representation learning for multi-source mixed signals—such as overlaid digits and overlapping speech—remain challenging, especially under unsupervised or weakly supervised settings without domain-specific priors. Method: This paper proposes the Multi-Stream Variational Autoencoder (MS-VAE), a VAE-based end-to-end architecture that explicitly models individual sources via discrete latent variables and couples them with a linear mixing generative mechanism. Contribution/Results: MS-VAE is the first to jointly embed discrete latent variables and an explicit linear combination model into the variational inference pipeline, enabling unsupervised or weakly supervised disentanglement without task-specific assumptions. Evaluated on overlaid MNIST and speaker diarization tasks, it achieves significantly improved separation quality—reducing false-negative rates by up to 32%—and demonstrates strong robustness and generalization across low-data regimes and varying supervision levels.

📝 Abstract
Variational autoencoders (VAEs) are a leading approach to address the problem of learning disentangled representations. Typically a single VAE is used and disentangled representations are sought in its continuous latent space. Here we explore a different approach by using discrete latents to combine VAE-representations of individual sources. The combination is done based on an explicit model for source combination, and we here use a linear combination model which is well suited, e.g., for acoustic data. We formally define such a multi-stream VAE (MS-VAE) approach, derive its inference and learning equations, and we numerically investigate its principled functionality. The MS-VAE is domain-agnostic, and we here explore its ability to separate sources into different streams using superimposed hand-written digits, and mixed acoustic sources in a speaker diarization task. We observe a clear separation of digits, and on speaker diarization we observe an especially low rate of missed speakers. Numerical experiments further highlight the flexibility of the approach across varying amounts of supervision and training data.
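The abstract's core idea, combining per-source VAE representations through an explicit linear mixing model gated by discrete latents, can be illustrated with a minimal sketch. This is not the paper's implementation; the dimensions, the fixed linear decoders, and the function names are hypothetical, chosen only to make the generative step concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): S streams, latent dim D, observation dim N
S, D, N = 3, 4, 8

# Each stream has its own decoder; sketched here as a fixed linear map z -> x
decoder_weights = [rng.normal(size=(N, D)) for _ in range(S)]

def decode_mixture(z_streams, s_active):
    """Linear-combination generative step: sum the decodings of active streams.

    z_streams: list of S continuous latent vectors, one per source stream
    s_active:  length-S binary vector of discrete latents selecting streams
    """
    x = np.zeros(N)
    for s in range(S):
        if s_active[s]:
            # Linear mixing model: active sources add up in observation space,
            # as is natural e.g. for superimposed digits or acoustic mixtures
            x += decoder_weights[s] @ z_streams[s]
    return x

z = [rng.normal(size=D) for _ in range(S)]
mix = decode_mixture(z, s_active=[1, 0, 1])  # streams 0 and 2 are "on"
```

Inference in the actual MS-VAE would additionally learn the decoders and infer both the continuous stream latents and the discrete selection variables from the mixed observation; the sketch only shows the forward (generative) direction.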
Problem

Research questions and friction points this paper is trying to address.

Disentangling mixed sources using discrete latent representations
Developing multi-stream VAE for source separation tasks
Addressing speaker diarization with reduced missed detection rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-stream VAE with discrete latent variables
Linear combination model for source separation
Domain-agnostic approach tested on digits and audio
Veranika Boukun
Machine Learning Lab, Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, Germany
Jörg Lücke
Professor for Artificial Intelligence, Innsbruck University, Austria
Machine Learning · Artificial Intelligence · Pattern Recognition · Computational Neuroscience