End-to-End Deep Learning for Predicting Metric Space-Valued Outputs

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Regression problems with non-Euclidean structured outputs—such as probability distributions, networks, and symmetric positive-definite matrices—cannot be adequately addressed by conventional vector-space methods. To tackle this, we propose E2M, the first end-to-end deep learning framework that directly models regression in metric spaces. E2M employs neural networks to learn input-dependent weights and computes geometrically aware predictions via the weighted Fréchet mean, thereby bypassing embedding mappings and parametric assumptions while fully preserving the intrinsic geometry of the output space. Theoretically, we establish a universal approximation theorem for E2M and provide convergence guarantees for its entropy-regularized objective. Empirically, E2M achieves significant improvements over state-of-the-art methods on real-world tasks—including human mortality forecasting and New York City taxi network modeling—with performance gains becoming more pronounced under large-sample regimes.
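The predictor sketched in the summary can be written compactly. The notation below is an assumption (the paper's own symbols may differ): \((\mathcal{M}, d)\) is the output metric space, \(Y_1,\dots,Y_n\) are the training outputs, and \(w(x;\theta)\) are simplex-valued weights produced by a neural network with parameters \(\theta\):

```latex
\hat{y}(x) \;=\; \operatorname*{arg\,min}_{y \in \mathcal{M}}
\sum_{j=1}^{n} w_j(x;\theta)\, d^2\bigl(y,\, Y_j\bigr),
\qquad
w_j(x;\theta) \ge 0,\quad \sum_{j=1}^{n} w_j(x;\theta) = 1 .
```

That is, the prediction is the weighted Fréchet mean of the training outputs, so it always lies in \(\mathcal{M}\) without any embedding step.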

📝 Abstract
Many modern applications involve predicting structured, non-Euclidean outputs such as probability distributions, networks, and symmetric positive-definite matrices. These outputs are naturally modeled as elements of general metric spaces, where classical regression techniques that rely on vector space structure no longer apply. We introduce E2M (End-to-End Metric regression), a deep learning framework for predicting metric space-valued outputs. E2M performs prediction via a weighted Fréchet mean over training outputs, where the weights are learned by a neural network conditioned on the input. This construction provides a principled mechanism for geometry-aware prediction that avoids surrogate embeddings and restrictive parametric assumptions, while fully preserving the intrinsic geometry of the output space. We establish theoretical guarantees, including a universal approximation theorem that characterizes the expressive capacity of the model and a convergence analysis of the entropy-regularized training objective. Through extensive simulations involving probability distributions, networks, and symmetric positive-definite matrices, we show that E2M consistently achieves state-of-the-art performance, with its advantages becoming more pronounced at larger sample sizes. Applications to human mortality distributions and New York City taxi networks further demonstrate the flexibility and practical utility of the framework.
Problem

Research questions and friction points this paper is trying to address.

Predicting structured non-Euclidean outputs in metric spaces
Developing geometry-aware deep learning without surrogate embeddings
Handling outputs like distributions, networks, and SPD matrices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning framework for metric space-valued outputs
Weighted Fréchet means learned by neural networks
Geometry-aware prediction preserving intrinsic output structure
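A minimal sketch of the prediction step, under assumptions not taken from the paper: the outputs are one-dimensional probability distributions compared in the Wasserstein-2 metric, where the weighted Fréchet mean has a closed form as the weighted average of quantile functions. The network is replaced by a fixed score vector passed through a softmax, purely for illustration; the distribution parameters are hypothetical.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax producing simplex weights."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical training outputs: quantile functions of three uniform
# distributions U[a_i, b_i], evaluated on a shared probability grid.
p = np.linspace(0.01, 0.99, 99)
params = [(0.0, 1.0), (1.0, 3.0), (2.0, 2.5)]
train_Q = np.stack([a + (b - a) * p for a, b in params])  # shape (3, 99)

# Stand-in for the neural network: any input-dependent scores mapped
# through a softmax yield valid Frechet-mean weights.
scores = np.array([0.5, 1.5, -0.2])
w = softmax(scores)

# For 1-D distributions under Wasserstein-2, the weighted Frechet mean
# is the distribution whose quantile function is the weighted average
# of the training quantile functions, so the output stays a valid
# distribution by construction.
pred_Q = w @ train_Q

# A valid quantile function must be nondecreasing.
assert np.all(np.diff(pred_Q) >= 0)
```

Because the prediction is a convex combination in quantile space, it automatically inherits the geometry of the output space, which is the property the framework is built around; richer output spaces (networks, SPD matrices) would require an iterative Fréchet mean solver instead of this closed form.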