Self-similarity Analysis in Deep Neural Networks

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the hitherto unquantified problem of how self-similarity in the geometric structure of deep neural network hidden spaces influences weight optimization and neuronal dynamics. To this end, we propose a latent-feature network modeling framework: each layer’s output is mapped onto a complex network, and its degree of self-similarity is quantified via hierarchical clustering and power-law distribution analysis. We further design a self-similarity–constrained training mechanism that dynamically regulates the hierarchical structure of feature networks within both MLPs and attention-based architectures. Experiments across multiple benchmark datasets demonstrate substantial improvements in classification accuracy—up to a 6 percentage-point gain. Our core contribution is the first establishment of a quantifiable link between hidden-layer geometric self-similarity and optimization behavior, empirically validating self-similarity as an effective, novel inductive bias for deep learning models.
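The pipeline summarized above (map a layer's outputs onto a complex network, then quantify self-similarity via the degree distribution's power-law behavior) can be sketched roughly as follows. The correlation threshold, the MLE exponent estimator, and the function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def feature_network(activations, threshold=0.15):
    """Map one layer's outputs onto a complex network: neurons are nodes,
    and two neurons are linked when their activations across a batch of
    inputs are strongly correlated. (The correlation threshold is an
    illustrative assumption, not the paper's setting.)"""
    corr = np.corrcoef(activations.T)             # neuron-by-neuron correlation
    adj = (np.abs(corr) > threshold).astype(int)  # threshold -> adjacency matrix
    np.fill_diagonal(adj, 0)                      # drop self-loops
    return adj

def power_law_exponent(degrees, k_min=1):
    """Estimate alpha in P(k) ~ k^(-alpha) for the degree distribution,
    using the standard discrete MLE approximation."""
    k = degrees[degrees >= k_min].astype(float)
    return 1.0 + len(k) / np.sum(np.log(k / (k_min - 0.5)))

# Toy run: 64 "neurons" observed over 200 random inputs.
rng = np.random.default_rng(0)
acts = rng.standard_normal((200, 64))
adj = feature_network(acts)
degrees = adj.sum(axis=0)
alpha = power_law_exponent(degrees[degrees > 0])
```

In practice the activations would come from a real hidden layer rather than random noise, and the estimated exponent would be tracked per layer and per training stage.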

📝 Abstract
Current research has found that some deep neural networks exhibit strong hierarchical self-similarity in their feature representations or parameter distributions. However, aside from preliminary studies on how the power-law distribution of weights at different training stages affects model performance, there has been no quantitative analysis of how the self-similarity of hidden-space geometry influences weight optimization, nor is there a clear understanding of the dynamic behavior of internal neurons. This paper therefore proposes a complex-network modeling method based on the output features of hidden-layer neurons, investigates the self-similarity of the feature networks constructed at different hidden layers, and analyzes how adjusting the degree of self-similarity in these feature networks can enhance the classification performance of deep neural networks. Validated on three families of architectures (MLPs, convolutional networks, and attention architectures), the study shows that the degree of self-similarity exhibited by feature networks varies across model architectures. Furthermore, embedding self-similarity constraints on the feature networks during training improves the performance of self-similar deep neural networks (MLP and attention architectures) by up to 6 percentage points.
Problem

Research questions and friction points this paper is trying to address.

Quantify the impact of hidden-space self-similarity on weight optimization in neural networks
Understand the dynamic behavior of internal neurons in deep networks
Enhance classification performance by adjusting the self-similarity of feature networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Complex network modeling for self-similarity analysis
Adjusting self-similarity to enhance classification performance
Embedding self-similarity constraints during training improves performance by up to 6 percentage points
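The paper does not spell out here how the self-similarity constraint is embedded in training. One plausible way to turn it into a penalty is to score how far a feature network's degree distribution deviates from a power law, which is a straight line in log-log coordinates. Everything below, including the `self_similarity_penalty` name and the 1 − R² score, is a hypothetical sketch, not the paper's mechanism.

```python
import numpy as np

def self_similarity_penalty(degrees):
    """Hypothetical regularizer: how far the degree distribution of a
    feature network is from a power law. A power law is a straight line
    in log-log space, so we use 1 - R^2 of a log-log linear fit as the
    penalty (0 = cleanly power-law, 1 = no power-law structure)."""
    ks, counts = np.unique(degrees[degrees > 0], return_counts=True)
    if len(ks) < 3:
        return 1.0  # too few distinct degrees to assess
    x, y = np.log(ks), np.log(counts / counts.sum())
    slope, intercept = np.polyfit(x, y, 1)   # least-squares line in log-log space
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(ss_res / ss_tot) if ss_tot > 0 else 0.0

# A synthetic power-law degree sequence should score low ...
rng = np.random.default_rng(1)
power_law_degrees = np.round(rng.pareto(2.0, 2000) + 1).astype(int)
# ... while a narrow Poisson-like degree sequence should score higher.
poisson_degrees = rng.poisson(20, 2000)
pl_pen = self_similarity_penalty(power_law_degrees)
po_pen = self_similarity_penalty(poisson_degrees)
```

During training, a term like this (computed on each layer's feature network) could be weighted and added to the task loss to dynamically regulate the hierarchical structure of the feature networks, as the summary describes.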