Evaluating Autoencoders for Parametric and Invertible Multidimensional Projections

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of designing multidimensional data projection methods that are simultaneously parametric (enabling zero-shot embedding of unseen points) and invertible (supporting the mapping of 2D points back into the original data space). We propose a customized autoencoder framework that jointly optimizes forward and inverse mappings through a 2D latent space. To enhance geometric fidelity and user control, we incorporate t-SNE initialization and a weighted joint loss combining reconstruction accuracy and smoothness, allowing the smoothing strength to be adjusted. We systematically evaluate three autoencoder architectures across four benchmark datasets, providing the first unified quantitative assessment of both parametric generalization and invertibility. Experimental results demonstrate that our method improves bidirectional mapping smoothness, reconstruction fidelity, and generalization stability over standard feedforward networks, particularly on high-dimensional data with complex intrinsic structure.
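The weighted joint loss described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the smoothness term here is a hypothetical stress-like penalty that compares pairwise distances in data space and in the 2D latent space, and `lam` plays the role of the user-adjustable smoothing strength.

```python
import numpy as np

def joint_loss(x, x_rec, z, lam=0.5):
    """Weighted joint loss: reconstruction MSE plus a smoothness term.

    x     : (n, d) original data points
    x_rec : (n, d) reconstructions from the inverse mapping
    z     : (n, 2) latent (projected) coordinates
    lam   : weight of the smoothness term (0 = pure reconstruction)

    NOTE: the stress-like smoothness penalty below is an illustrative
    assumption; the paper's actual loss may be defined differently.
    """
    rec = np.mean((x - x_rec) ** 2)
    # Pairwise distances in data space and in the 2D latent space.
    dx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    dz = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    # Penalize disagreement between the two distance structures.
    smooth = np.mean((dx - dz) ** 2)
    return (1.0 - lam) * rec + lam * smooth
```

Setting `lam=0` recovers a plain reconstruction loss, while larger values trade reconstruction fidelity for a smoother, more distance-preserving 2D layout.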

📝 Abstract
Recently, neural networks have gained attention for creating parametric and invertible multidimensional data projections. Parametric projections allow for embedding previously unseen data without recomputing the projection as a whole, while invertible projections enable the generation of new data points. However, these properties have never been explored simultaneously for arbitrary projection methods. We evaluate three autoencoder (AE) architectures for creating parametric and invertible projections. Based on a given projection, we train AEs to learn a mapping into 2D space and an inverse mapping into the original space. We perform a quantitative and qualitative comparison on four datasets of varying dimensionality and pattern complexity using t-SNE. Our results indicate that AEs with a customized loss function can create smoother parametric and inverse projections than feed-forward neural networks while giving users control over the strength of the smoothing effect.
Problem

Research questions and friction points this paper is trying to address.

Evaluate autoencoders for parametric and invertible multidimensional projections
Compare AE architectures for learning 2D and inverse mappings
Assess smoothing effects in projections using customized loss functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoencoders enable parametric and invertible projections
Customized loss function improves projection smoothness
Quantitative and qualitative comparison using t-SNE
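The two properties named above, parametric projection of unseen points and inversion back into data space, correspond to the encoder and decoder of a trained autoencoder. The sketch below shows only this interface with a toy linear, untrained model; the class name, weights, and dimensions are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyAE:
    """Illustrative linear autoencoder (untrained random weights).

    Stands in for the trained models evaluated in the paper: the
    encoder gives a parametric 2D projection, the decoder an
    inverse mapping back to the original space.
    """
    def __init__(self, d_in, d_lat=2):
        self.W_enc = rng.normal(size=(d_in, d_lat)) / np.sqrt(d_in)
        self.W_dec = rng.normal(size=(d_lat, d_in)) / np.sqrt(d_lat)

    def project(self, x):
        # Parametric: applies to unseen points without refitting.
        return x @ self.W_enc

    def invert(self, z):
        # Inverse mapping: 2D coordinates back to data space.
        return z @ self.W_dec

ae = TinyAE(d_in=10)
x_new = rng.normal(size=(5, 10))   # previously unseen points
z = ae.project(x_new)              # zero-shot 2D embedding
x_rec = ae.invert(z)               # reconstruction in original space
```

Because both directions are explicit functions, new points can be embedded without recomputing the projection, and arbitrary 2D positions can be decoded to generate new data points.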
Frederik L. Dennig
University of Konstanz
Visual Analytics · Information Visualization · High-Dimensional Data
Nina Geyer
University of Konstanz, Germany
Daniela Blumberg
University of Konstanz, Germany
Yannick Metz
University of Konstanz, ETH Zurich
Deep Learning · Reinforcement Learning · Interactive Machine Learning
Daniel A. Keim
University of Konstanz, Germany