DE-VAE: Revealing Uncertainty in Parametric and Inverse Projections with Variational Autoencoders using Differential Entropy

📅 2025-08-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing autoencoders exhibit insufficient robustness in embedding and reconstruction under out-of-distribution (OOD) samples and fail to achieve parametric, invertible high-dimensional projections. To address this, we propose the Differential Entropy-Regularized Variational Autoencoder (DE-VAE), the first VAE framework to explicitly incorporate differential entropy—thereby modeling uncertainty in both projection and back-projection processes—and enhancing sensitivity to anomalies and unknown data. Grounded in variational inference, DE-VAE regularizes latent-space structure via differential entropy constraints, preserving embedding fidelity comparable to state-of-the-art autoencoders while substantially improving uncertainty quantification. Evaluated on four benchmark datasets, DE-VAE demonstrates superior OOD robustness and interpretability in both 2D visualization (against UMAP and t-SNE baselines) and original-space reconstruction tasks.

Technology Category

Application Category

📝 Abstract
Recently, autoencoders (AEs) have gained interest for creating parametric and invertible projections of multidimensional data. Parametric projections make it possible to embed new, unseen samples without recalculating the entire projection, while invertible projections allow the synthesis of new data instances. However, existing methods perform poorly when dealing with out-of-distribution samples in either the data or embedding space. Thus, we propose DE-VAE, an uncertainty-aware variational AE using differential entropy (DE) to improve the learned parametric and invertible projections. Given a fixed projection, we train DE-VAE to learn a mapping into 2D space and an inverse mapping back to the original space. We conduct quantitative and qualitative evaluations on four well-known datasets, using UMAP and t-SNE as baseline projection methods. Our findings show that DE-VAE can create parametric and inverse projections with comparable accuracy to other current AE-based approaches while enabling the analysis of embedding uncertainty.
Problem

Research questions and friction points this paper is trying to address.

Improving parametric and invertible projections with uncertainty awareness
Handling out-of-distribution samples in data and embedding spaces
Analyzing embedding uncertainty in variational autoencoder-based projections
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses variational autoencoders for uncertainty-aware projections
Employs differential entropy to improve projection accuracy
Enables parametric and inverse mappings with embedding analysis