An approach to Fisher-Rao metric for infinite dimensional non-parametric information geometry

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In infinite-dimensional nonparametric information geometry, the Fisher–Rao metric’s functional nature renders its inversion intractable—long posing a “computational intractability barrier.” Method: We propose an orthogonal decomposition framework on the tangent space, yielding a finite-dimensional, computable covariate Fisher information matrix (cFIM) under observable covariates. This integrates covariate projection with second-order curvature analysis of the KL divergence within a semiparametric modeling paradigm. Contributions: (i) We prove that the G-entropy equals the trace of cFIM, establishing it as a geometric invariant; (ii) we strengthen the manifold assumption into a testable condition—rank deficiency of cFIM; (iii) we define the information capture ratio, enabling rigorous intrinsic dimension estimation. Our framework provides geometrically grounded, statistically valid tools for quantifying both statistical coverage and model efficiency in explainable AI, and enables verifiable intrinsic dimension inference in high-dimensional settings.

📝 Abstract
Non-parametric information geometry, being infinite dimensional, has long faced an "intractability barrier": the Fisher-Rao metric becomes a functional, making its inverse difficult to define. This paper introduces a novel framework to resolve this intractability through an orthogonal decomposition of the tangent space, $T_fM = S \oplus S^{\perp}$, where $S$ is an observable covariate subspace. Through this decomposition, we derive the Covariate Fisher Information Matrix (cFIM), denoted $G_f$, a finite-dimensional and computable representative of the information extractable from the manifold's geometry. By proving the Trace Theorem, $H_G(f) = \mathrm{Tr}(G_f)$, we establish a rigorous foundation for the G-entropy we introduced previously, identifying it not merely as a gradient-based regularizer but as a fundamental geometric invariant representing the total explainable statistical information captured by the probability distribution associated with the model. Furthermore, we link $G_f$ to the second-order derivative (i.e., the curvature) of the KL-divergence, leading to the notion of a Covariate Cramér-Rao Lower Bound (CRLB). We demonstrate that $G_f$ is congruent to the Efficient Fisher Information Matrix, thereby providing fundamental variance limits for semi-parametric estimators. Finally, we apply our geometric framework to the Manifold Hypothesis, lifting it from a heuristic assumption to a testable condition: rank deficiency of the cFIM. By defining the Information Capture Ratio, we provide a rigorous method for estimating intrinsic dimensionality in high-dimensional data. In short, our work bridges the gap between abstract information geometry and the demands of explainable AI by providing a tractable path for revealing the statistical coverage and efficiency of non-parametric models.
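The abstract's central quantities admit a simple numerical sketch. The following is an illustrative assumption, not the paper's implementation: we estimate a cFIM-like matrix from simulated score-style covariate gradients confined to a low-dimensional subspace, read off the G-entropy as its trace (per the stated Trace Theorem), and estimate intrinsic dimension by thresholding a cumulative eigenvalue ratio in the spirit of the Information Capture Ratio. All variable names and the data-generating process are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n gradient observations in a d-dimensional covariate space where
# only r directions carry signal, so the empirical cFIM is rank-deficient.
n, d, r = 5000, 10, 3
basis = rng.standard_normal((d, r))
scores = rng.standard_normal((n, r)) @ basis.T  # gradients confined to an r-dim subspace

# Empirical cFIM: G_f ~ E[score score^T]
G_f = scores.T @ scores / n

# Trace Theorem (as stated in the abstract): G-entropy H_G(f) = Tr(G_f)
H_G = np.trace(G_f)

# Capture ratio: fraction of Tr(G_f) accounted for by the top-k eigenvalues;
# the smallest k exceeding a threshold serves as an intrinsic-dimension estimate.
eigvals = np.sort(np.linalg.eigvalsh(G_f))[::-1]
capture = np.cumsum(eigvals) / H_G
intrinsic_dim = int(np.searchsorted(capture, 0.99) + 1)

print(f"H_G(f) = {H_G:.3f}, estimated intrinsic dimension = {intrinsic_dim}")
```

Because the simulated gradients span exactly three directions, the trailing eigenvalues vanish up to floating-point noise and the capture ratio saturates at k = 3, recovering the planted rank.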
Problem

Research questions and friction points this paper is trying to address.

Overcoming the intractability of the Fisher-Rao metric in infinite-dimensional non-parametric information geometry.
Establishing a computable finite-dimensional covariate Fisher information matrix for statistical analysis.
Bridging abstract information geometry with explainable AI for model interpretability.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonal decomposition of tangent space for tractability
Covariate Fisher Information Matrix as finite-dimensional representative
Linking geometry to explainable AI via information capture ratio
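The orthogonal decomposition listed above can be sketched in a finite discretization. This is a hypothetical illustration (the grid representation, the basis `S_basis`, and all names are assumptions): tangent directions at $f$ are represented as vectors on a sample grid, $S$ is the span of a few observed covariate scores, and any tangent direction splits as $v = P_S v + P_{S^{\perp}} v$ via least-squares projection.

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 200, 4                           # grid size, number of covariate scores
S_basis = rng.standard_normal((m, k))   # columns span the covariate subspace S

v = rng.standard_normal(m)              # an arbitrary tangent direction

# Orthogonal projection of v onto S via least squares
coef, *_ = np.linalg.lstsq(S_basis, v, rcond=None)
v_S = S_basis @ coef                    # component in S (observable part)
v_perp = v - v_S                        # component in S^perp (residual part)

# The residual is orthogonal to S, and the two parts reconstruct v exactly.
print(np.allclose(S_basis.T @ v_perp, 0.0), np.allclose(v_S + v_perp, v))
```

The tractability claim corresponds to working only with the finite-dimensional coordinates `coef` of the $S$-component, rather than with the full infinite-dimensional tangent vector.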