Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis

📅 2025-05-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work challenges the "scaling-as-representation-optimization" optimism, questioning whether larger models inherently yield better internal representations. Method: the authors compare networks evolved through an open-ended search process against networks trained by conventional stochastic gradient descent (SGD) on the task of generating a single image, introducing the "fractured entangled representation" (FER) hypothesis. Because each hidden neuron's full functional behavior can be visualized as an image in this setting, representational structure can be inspected neuron by neuron across the two optimization paradigms. Contribution/Results: SGD-trained networks exhibit pervasive FER (disorganized, entangled internal structure), whereas the evolved networks largely lack it, even approaching a unified factored representation (UFR). Crucially, networks with identical output behavior can possess fundamentally different internal representations; FER may be degrading core capacities such as generalization, creativity, and continual learning, making its understanding and mitigation important for interpretable and robust AI.

📝 Abstract
Much of the excitement in modern AI is driven by the observation that scaling up existing systems leads to better performance. But does better performance necessarily imply better internal representations? While the representational optimist assumes it must, this position paper challenges that view. We compare neural networks evolved through an open-ended search process to networks trained via conventional stochastic gradient descent (SGD) on the simple task of generating a single image. This minimal setup offers a unique advantage: each hidden neuron's full functional behavior can be easily visualized as an image, thus revealing how the network's output behavior is internally constructed neuron by neuron. The result is striking: while both networks produce the same output behavior, their internal representations differ dramatically. The SGD-trained networks exhibit a form of disorganization that we term fractured entangled representation (FER). Interestingly, the evolved networks largely lack FER, even approaching a unified factored representation (UFR). In large models, FER may be degrading core model capacities like generalization, creativity, and (continual) learning. Therefore, understanding and mitigating FER could be critical to the future of representation learning.
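The abstract's key experimental device is that in a single-image generation network, every hidden neuron's full functional behavior can itself be rendered as an image. A minimal NumPy sketch of that idea follows (this is not the authors' code; the coordinate-to-pixel network, its size, weights, and tanh activations are illustrative assumptions): a tiny MLP maps normalized (x, y) pixel coordinates to an intensity, and evaluating any hidden unit over the full coordinate grid yields an image of that neuron's behavior.

```python
import numpy as np

def make_grid(size=32):
    # Normalized (x, y) coordinates for every pixel, shape (size*size, 2).
    coords = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(coords, coords)
    return np.stack([xx.ravel(), yy.ravel()], axis=1)

def forward_with_hidden(inputs, w1, b1, w2, b2):
    # Tiny 2-layer MLP mapping (x, y) -> pixel intensity.
    hidden = np.tanh(inputs @ w1 + b1)   # (N, H) hidden activations
    output = np.tanh(hidden @ w2 + b2)   # (N, 1) pixel values
    return hidden, output

rng = np.random.default_rng(0)
size, n_hidden = 32, 8  # illustrative sizes, not from the paper
w1 = rng.normal(size=(2, n_hidden))
b1 = rng.normal(size=n_hidden)
w2 = rng.normal(size=(n_hidden, 1))
b2 = rng.normal(size=1)

grid = make_grid(size)
hidden, output = forward_with_hidden(grid, w1, b1, w2, b2)

# Each hidden neuron's full functional behavior, rendered as an image:
neuron_images = [hidden[:, j].reshape(size, size) for j in range(n_hidden)]
output_image = output.reshape(size, size)
```

Inspecting `neuron_images` side by side is what makes the FER/UFR contrast visible: fractured representations show disorganized, redundant neuron images, while factored ones show clean, reusable components.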
Problem

Research questions and friction points this paper is trying to address.

Challenges the link between performance and internal representation quality
Identifies fractured entangled representation in SGD-trained networks
Proposes mitigating FER to enhance generalization and learning capacities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-ended search for neural network evolution
Visualizing neuron behavior as images
Mitigating fractured entangled representation (FER)
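The paper's evolved networks come from an open-ended search process (in the style of Picbreeder-like systems), which is not reproduced here. As a much simpler stand-in, the sketch below uses a plain (1+1) hill-climbing loop that mutates the weights of the same kind of coordinate-to-pixel network toward a hypothetical target image; the target pattern, network sizes, and mutation scale are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def render(params, grid, size):
    # Evaluate the coordinate-to-pixel MLP over the whole image grid.
    w1, b1, w2, b2 = params
    hidden = np.tanh(grid @ w1 + b1)
    return np.tanh(hidden @ w2 + b2).reshape(size, size)

def mutate(params, sigma=0.1):
    # Gaussian perturbation of every weight and bias.
    return [p + rng.normal(scale=sigma, size=p.shape) for p in params]

size, n_hidden = 16, 8
coords = np.linspace(-1.0, 1.0, size)
xx, yy = np.meshgrid(coords, coords)
grid = np.stack([xx.ravel(), yy.ravel()], axis=1)

# Hypothetical target: a simple radial blob the search must reproduce.
target = np.tanh(2.0 - 4.0 * (xx**2 + yy**2))

params = [rng.normal(size=(2, n_hidden)), rng.normal(size=n_hidden),
          rng.normal(size=(n_hidden, 1)), rng.normal(size=1)]
best_loss = np.mean((render(params, grid, size) - target) ** 2)
init_loss = best_loss  # remember the starting error

# (1+1) evolutionary loop: keep a mutation only if it improves the match.
for _ in range(200):
    child = mutate(params)
    loss = np.mean((render(child, grid, size) - target) ** 2)
    if loss < best_loss:
        params, best_loss = child, loss
```

Unlike this objective-driven hill climber, the open-ended search the paper studies accumulates images without a fixed target, which is part of why its networks develop more factored internal structure.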