Latent Space Topology Evolution in Multilayer Perceptrons

๐Ÿ“… 2025-06-02
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This study addresses the poorly understood topological evolution of hidden-space representations in multilayer perceptrons (MLPs). We propose a modeling framework based on simplicial complex towers and a dual persistent homology analysis method to jointly characterize intra-layer scale variation and inter-layer feature transformation. We introduce the first MLP persistence theory, proving a topological stability theorem; establish a rigorous theoretical link between linear separability in hidden spaces and the connectivity of neural complexes; and design a combinatorial persistent homology computation algorithm with trajectory visualization techniques to track data flow. Evaluated on synthetic and medical datasets, our approach successfully identifies redundant layers, pinpoints critical topological phase transitions, and yields interpretable classification decision basesโ€”thereby significantly enhancing structural interpretability and diagnostic reliability of deep networks.

Technology Category

Application Category

๐Ÿ“ Abstract
This paper introduces a topological framework for interpreting the internal representations of Multilayer Perceptrons (MLPs). We construct a simplicial tower, a sequence of simplicial complexes connected by simplicial maps, that captures how data topology evolves across network layers. Our approach enables bi-persistence analysis: layer persistence tracks topological features within each layer across scales, while MLP persistence reveals how these features transform through the network. We prove stability theorems for our topological descriptors and establish that linear separability in latent spaces is related to disconnected components in the nerve complexes. To make our framework practical, we develop a combinatorial algorithm for computing MLP persistence and introduce trajectory-based visualisations that track data flow through the network. Experiments on synthetic and real-world medical data demonstrate our method's ability to identify redundant layers, reveal critical topological transitions, and provide interpretable insights into how MLPs progressively organise data for classification.
Problem

Research questions and friction points this paper is trying to address.

Interpreting internal representations of Multilayer Perceptrons (MLPs) topologically
Tracking evolution of data topology across MLP layers
Analyzing linear separability via disconnected nerve complexes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Topological framework for MLP internal representations
Bi-persistence analysis of layer and MLP features
Combinatorial algorithm for MLP persistence computation