Local Intrinsic Dimension of Representations Predicts Alignment and Generalization in AI Models and Human Brain

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the shared mechanisms underlying the generalization ability of AI models, alignment between models, and alignment between model representations and human brain activity. By introducing local intrinsic dimensionality as a unified geometric metric, the work overcomes the limitations of traditional global dimensionality measures and shows for the first time that a single metric simultaneously predicts model generalization performance, cross-model consistency, and alignment with human brain representations. Combining representational geometry analysis, local intrinsic dimensionality estimation, and large-scale comparisons between model representations and brain imaging data, the study finds that lower local intrinsic dimensionality correlates with stronger generalization and higher alignment across models and with the brain. Furthermore, increasing model scale and data size enhances performance by effectively reducing this dimensionality.
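The summary above centers on estimating local intrinsic dimensionality from learned embeddings. The paper does not specify its estimator here, but a standard choice is the Levina-Bickel maximum-likelihood estimator, which infers a per-point dimension from ratios of nearest-neighbor distances. A minimal sketch, assuming Euclidean embeddings and a fixed neighborhood size `k`:

```python
import numpy as np

def local_intrinsic_dimension(X, k=20):
    """Levina-Bickel MLE of local intrinsic dimension, one estimate per point.

    X: (n, d) array of embeddings; k: number of nearest neighbors.
    Returns an (n,) array of per-point LID estimates.
    """
    # Pairwise squared Euclidean distances via the Gram-matrix identity
    sq = (X ** 2).sum(axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    D = np.sqrt(np.clip(D2, 0.0, None))
    np.fill_diagonal(D, np.inf)              # exclude self-distances

    # Distances to the k nearest neighbors, sorted ascending
    T = np.sort(D, axis=1)[:, :k]            # (n, k)

    # MLE: inverse mean log-ratio of the k-th to the j-th neighbor distance
    log_ratio = np.log(T[:, -1:] / T[:, :-1])  # (n, k-1)
    return (k - 1) / log_ratio.sum(axis=1)

# Sanity check: points sampled from a 2-D plane isometrically embedded in 10-D
# should yield per-point estimates clustered near 2, regardless of ambient dim.
rng = np.random.default_rng(0)
Z = rng.random((800, 2))
X = np.hstack([Z, np.zeros((800, 8))])
print(round(float(local_intrinsic_dimension(X, k=20).mean()), 2))
```

Averaging the per-point estimates over a dataset gives a single scalar per layer or model, which is the kind of quantity the study correlates with generalization and alignment.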

📝 Abstract
Recent work has found that neural networks with stronger generalization tend to exhibit higher representational alignment with one another across architectures and training paradigms. In this work, we show that models with stronger generalization also align more strongly with human neural activity. Moreover, generalization performance, model--model alignment, and model--brain alignment are all significantly correlated with each other. We further show that these relationships can be explained by a single geometric property of learned representations: the local intrinsic dimension of embeddings. Lower local dimension is consistently associated with stronger model--model alignment, stronger model--brain alignment, and better generalization, whereas global dimension measures fail to capture these effects. Finally, we find that increasing model capacity and training data scale systematically reduces local intrinsic dimension, providing a geometric account of the benefits of scaling. Together, our results identify local intrinsic dimension as a unifying descriptor of representational convergence in artificial and biological systems.
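The abstract's model-model alignment can be quantified in several ways; the paper does not name its metric in this excerpt. One widely used choice is linear centered kernel alignment (CKA), which compares two representation matrices over the same stimuli and is invariant to rotation and isotropic scaling. A minimal sketch, not necessarily the metric the authors use:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (n, d1) and Y (n, d2)
    computed over the same n stimuli. Returns a scalar in [0, 1]."""
    X = X - X.mean(axis=0)                   # center each feature
    Y = Y - Y.mean(axis=0)
    # HSIC-based similarity: ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# A representation compared with a rotated, rescaled copy of itself
# scores ~1; two independent random representations score much lower.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))  # random rotation
print(round(linear_cka(X, 3.0 * (X @ Q)), 4))
```

In a study like this one, such a score would be computed pairwise across models (and between model features and brain recordings) to populate the alignment matrices that are then correlated with generalization and local intrinsic dimension.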
Problem

Research questions and friction points this paper is trying to address.

local intrinsic dimension
representational alignment
generalization
model-brain alignment
neural representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

local intrinsic dimension
representational alignment
generalization
model-brain alignment
scaling laws
Junjie Yu
Southern University of Science and Technology
Deep Learning · Neuroscience
Wenxiao Ma
Department of Biomedical Engineering, Southern University of Science and Technology
Chen Wei
Department of Biomedical Engineering, Southern University of Science and Technology
Jianyu Zhang
Department of Biomedical Engineering, Southern University of Science and Technology
Haotian Deng
ByteDance
Computer Networking
Zihan Deng
Department of Biomedical Engineering, Southern University of Science and Technology
Quanying Liu
Department of Biomedical Engineering, Southern University of Science and Technology