On the Relationship Between Representation Geometry and Generalization in Deep Neural Networks

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the relationship between the geometric structure of deep neural network representations and generalization performance, aiming to predict model accuracy without labeled data. By analyzing representation geometry across diverse pretrained models, the authors introduce unsupervised metrics -- such as effective dimensionality and total compression -- and conduct causal intervention experiments using PCA projection and noise injection (Gaussian, uniform, Dropout, and salt-and-pepper). The work demonstrates, for the first time, that effective dimensionality has domain-agnostic predictive power and a causal influence on model performance across modalities (vision and language) and architectures. On benchmarks including ImageNet, CIFAR-10, SST-2/MNLI, and AG News, effective dimensionality correlates strongly with accuracy (partial r up to 0.75), whereas model size shows no significant association (r=0.07); moreover, retaining 95% of variance via PCA incurs only a 0.03-percentage-point drop in accuracy.
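As a rough illustration of the kind of unsupervised metric the summary describes, effective dimensionality is often estimated as the participation ratio of the feature covariance spectrum, $\mathrm{ED} = (\sum_i \lambda_i)^2 / \sum_i \lambda_i^2$. The sketch below uses that common definition; the paper's exact estimator, and the function name, are assumptions.

```python
import numpy as np

def effective_dimension(features: np.ndarray) -> float:
    """Participation ratio of the covariance eigenvalue spectrum of a
    (n_samples, n_features) activation matrix.

    This is one common unsupervised definition of effective
    dimensionality; the paper's precise estimator may differ.
    """
    # Center the features so singular values relate to covariance eigenvalues.
    X = features - features.mean(axis=0, keepdims=True)
    s = np.linalg.svd(X, compute_uv=False)
    lam = s**2 / (X.shape[0] - 1)  # covariance eigenvalues
    # (sum lam)^2 / sum(lam^2): equals d for isotropic data, 1 for rank-1 data.
    return float(lam.sum() ** 2 / (lam**2).sum())
```

On perfectly isotropic 2-D data this returns 2.0, while rank-1 data returns 1.0, so the metric interpolates between "all variance in one direction" and "variance spread evenly", with no labels required.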

📝 Abstract
We investigate the relationship between representation geometry and neural network performance. Analyzing 52 pretrained ImageNet models across 13 architecture families, we show that effective dimension -- an unsupervised geometric metric -- strongly predicts accuracy. Output effective dimension achieves partial r=0.75 ($p<10^{-10}$) after controlling for model capacity, while total compression achieves partial r=-0.72. These findings replicate across ImageNet and CIFAR-10, and generalize to NLP: effective dimension predicts performance for 8 encoder models on SST-2/MNLI and 15 decoder-only LLMs on AG News (r=0.69, p=0.004), while model size does not (r=0.07). We establish bidirectional causality: degrading geometry via noise causes accuracy loss (r=-0.94, $p<10^{-9}$), while improving geometry via PCA maintains accuracy across architectures (-0.03pp at 95% variance). This relationship is noise-type agnostic -- Gaussian, uniform, Dropout, and salt-and-pepper noise all show $|r|>0.90$. These results establish that effective dimension provides domain-agnostic predictive and causal information about neural network performance, computed entirely without labels.
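The PCA intervention reported in the abstract (-0.03pp at 95% variance) can be sketched as projecting representations onto the leading principal components that retain a target fraction of variance, then mapping back to the original space before evaluation. The function below is a minimal sketch of that step only; the name, the exact variance threshold handling, and how the projected features feed into the model head are assumptions, not the authors' implementation.

```python
import numpy as np

def pca_project(features: np.ndarray, variance_kept: float = 0.95) -> np.ndarray:
    """Reconstruct (n_samples, n_features) activations from the smallest
    number of principal components whose cumulative variance ratio
    reaches `variance_kept`.

    A sketch of a PCA-based geometry intervention; experimental details
    in the paper may differ.
    """
    mu = features.mean(axis=0, keepdims=True)
    X = features - mu
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var_ratio = s**2 / (s**2).sum()
    # Smallest k whose cumulative variance ratio reaches the threshold.
    k = int(np.searchsorted(np.cumsum(var_ratio), variance_kept) + 1)
    # Project onto the top-k components, then map back to feature space.
    return X @ Vt[:k].T @ Vt[:k] + mu
```

Because the output lives in the original feature space with the original shape, it can be fed to a frozen classifier head unchanged, which is what makes "accuracy drop after projection" a direct causal probe of the representation's geometry.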
Problem

Research questions and friction points this paper is trying to address.

representation geometry
generalization
effective dimension
neural network performance
domain-agnostic
Innovation

Methods, ideas, or system contributions that make the work stand out.

effective dimension
representation geometry
generalization
causality
unsupervised metric