🤖 AI Summary
Existing autoencoders exhibit insufficient robustness in embedding and reconstruction under out-of-distribution (OOD) samples, and they do not provide parametric, invertible projections of high-dimensional data. To address this, we propose the Differential Entropy-Regularized Variational Autoencoder (DE-VAE), the first VAE framework to explicitly incorporate differential entropy, thereby modeling uncertainty in both the projection and back-projection processes and enhancing sensitivity to anomalies and unknown data. Grounded in variational inference, DE-VAE regularizes latent-space structure via differential entropy constraints, preserving embedding fidelity comparable to state-of-the-art autoencoders while substantially improving uncertainty quantification. Evaluated on four benchmark datasets, DE-VAE demonstrates superior OOD robustness and interpretability in both 2D visualization (against UMAP and t-SNE baselines) and original-space reconstruction tasks.
📝 Abstract
Recently, autoencoders (AEs) have gained interest for creating parametric and invertible projections of multidimensional data. Parametric projections make it possible to embed new, unseen samples without recalculating the entire projection, while invertible projections allow the synthesis of new data instances. However, existing methods perform poorly when dealing with out-of-distribution samples in either the data or embedding space. Thus, we propose DE-VAE, an uncertainty-aware variational AE using differential entropy (DE) to improve the learned parametric and invertible projections. Given a fixed projection, we train DE-VAE to learn a mapping into 2D space and an inverse mapping back to the original space. We conduct quantitative and qualitative evaluations on four well-known datasets, using UMAP and t-SNE as baseline projection methods. Our findings show that DE-VAE can create parametric and inverse projections with comparable accuracy to other current AE-based approaches while enabling the analysis of embedding uncertainty.
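The abstract does not spell out the training objective, so the following is only an illustrative sketch of the general idea: augmenting a standard VAE loss with the differential entropy of the diagonal-Gaussian latent posterior, which is available in closed form. The MSE reconstruction term, the KL regularizer toward a standard normal, and the entropy weight `lam` are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def gaussian_diff_entropy(log_var):
    # Differential entropy of a diagonal Gaussian N(mu, diag(sigma^2)):
    # h = 0.5 * sum_d (1 + log(2*pi*sigma_d^2)); it does not depend on the mean.
    return 0.5 * np.sum(1.0 + np.log(2.0 * np.pi) + log_var, axis=-1)

def kl_to_standard_normal(mu, log_var):
    # KL(N(mu, diag(sigma^2)) || N(0, I)), the usual VAE latent regularizer.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def de_vae_loss(x, x_hat, mu, log_var, lam=0.1):
    # Hypothetical combined objective: reconstruction + KL + weighted
    # differential-entropy term (lam is an assumed hyperparameter).
    recon = np.sum((x - x_hat) ** 2, axis=-1)  # per-sample MSE reconstruction
    kl = kl_to_standard_normal(mu, log_var)
    de = gaussian_diff_entropy(log_var)
    return np.mean(recon + kl + lam * de)
```

In such a setup, the per-sample differential entropy itself can double as an uncertainty score: OOD inputs that the encoder maps to broad posteriors (large `log_var`) receive high entropy, which matches the paper's stated goal of analyzing embedding uncertainty.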