🤖 AI Summary
This work addresses the challenge of designing multidimensional data projection methods that are simultaneously parametric (enabling zero-shot embedding of unseen points) and invertible (supporting reconstruction in the original space). We propose a customized autoencoder framework that jointly optimizes forward and inverse mappings through a 2D latent space. To enhance geometric fidelity and user control, we incorporate t-SNE initialization and a weighted joint loss combining reconstruction accuracy and smoothness, with an adjustable smoothing strength. We systematically evaluate three autoencoder architectures across four benchmark datasets, providing a unified quantitative assessment of both parametric generalization and invertibility. Experimental results demonstrate that our method improves bidirectional mapping smoothness, reconstruction fidelity, and generalization stability over standard feed-forward networks, particularly on high-dimensional data with complex intrinsic structure.
📝 Abstract
Recently, neural networks have gained attention for creating parametric and invertible multidimensional data projections. Parametric projections allow for embedding previously unseen data without recomputing the projection as a whole, while invertible projections enable the generation of new data points. However, these properties have never been explored simultaneously for arbitrary projection methods. We evaluate three autoencoder (AE) architectures for creating parametric and invertible projections. Based on a given projection, we train AEs to learn a mapping into 2D space and an inverse mapping into the original space. We perform a quantitative and qualitative comparison on four datasets of varying dimensionality and pattern complexity, using t-SNE as the target projection. Our results indicate that AEs with a customized loss function can create smoother parametric and inverse projections than feed-forward neural networks while giving users control over the strength of the smoothing effect.
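The weighted joint loss described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual code: the specific smoothness term (here, a Lipschitz-style ratio penalty over deterministic sample pairs), the weighting scheme, and all names (`joint_loss`, `lam_rec`, `lam_smooth`) are assumptions for illustration. It combines three terms: matching the latent codes to a given t-SNE layout, reconstruction fidelity for the inverse mapping, and a smoothness penalty whose strength the user controls.

```python
import numpy as np

def joint_loss(X, Z, X_hat, Z_target, lam_rec=1.0, lam_smooth=0.1):
    """Sketch of a weighted joint AE loss for parametric + invertible projection.

    X        : (n, d) original high-dimensional points
    Z        : (n, 2) latent codes produced by the encoder
    X_hat    : (n, d) reconstructions produced by the decoder
    Z_target : (n, 2) precomputed 2D layout (e.g. t-SNE) the encoder should match
    """
    # Parametric projection term: latent codes should match the given t-SNE layout.
    proj = np.mean((Z - Z_target) ** 2)

    # Inverse mapping term: decoder should reconstruct the original points.
    rec = np.mean((X_hat - X) ** 2)

    # Smoothness term (assumed surrogate): nearby latent codes should decode to
    # nearby points, penalizing large output change per unit latent change.
    # Deterministic circular pairing keeps the sketch reproducible.
    idx = np.roll(np.arange(len(Z)), 1)
    smooth = np.mean(
        np.sum((X_hat - X_hat[idx]) ** 2, axis=1)
        / (np.sum((Z - Z[idx]) ** 2, axis=1) + 1e-8)
    )

    # lam_smooth exposes the user-controlled strength of the smoothing effect.
    return proj + lam_rec * rec + lam_smooth * smooth
```

In this sketch, raising `lam_smooth` trades reconstruction sharpness for a smoother bidirectional mapping, which mirrors the user-controlled smoothing the abstract describes.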