🤖 AI Summary
This study addresses the challenge of deciphering the structural and functional mechanisms underlying the complex spatiotemporal activity of heterogeneous neural systems, including connectivity matrices, neuronal cell types, signaling dynamics, and latent external stimuli. To this end, the authors propose an interpretable modeling framework based on graph neural networks that integrates neural dynamics simulation with graph structure learning, overcoming the limited interpretability of conventional black-box models while maintaining high predictive accuracy. Applied to large-scale simulated neural ensembles, the method jointly infers, for the first time, connectivity architecture, cell-type identity, signaling mechanisms, and hidden stimuli: it reconstructs ground-truth connectivity matrices, neuron types, and signal transmission functions at the scale of thousands of neurons and, in certain scenarios, identifies unknown external inputs.
📝 Abstract
Graph neural networks trained to predict observable dynamics can be used to decompose the temporal activity of complex heterogeneous systems into simple, interpretable representations. Here we apply this framework to simulated neural assemblies with thousands of neurons and demonstrate that it can jointly reveal the connectivity matrix, the neuron types, the signaling functions, and in some cases hidden external stimuli. In contrast to existing machine learning approaches such as recurrent neural networks and transformers, which emphasize predictive accuracy but offer limited interpretability, our method provides both reliable forecasts of neural activity and an interpretable decomposition of the mechanisms governing large neural assemblies.
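The core idea, recovering interpretable structure from observed dynamics alone, can be illustrated with a deliberately simplified toy model (this is a sketch of the general principle, not the paper's graph-neural-network architecture). Suppose activity evolves as x_{t+1} = tanh(A x_t + u_t) with a known external drive u_t; then the interaction matrix A, playing the role of the connectivity matrix, is identifiable from the trajectory by inverting the signaling nonlinearity and solving a linear least-squares problem. All parameters below (network size, sparsity, drive statistics) are hypothetical choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 20, 400  # neurons, time steps (toy scale)

# Sparse ground-truth connectivity (hypothetical parameters).
A_true = rng.normal(0.0, 0.4, (n, n)) * (rng.random((n, n)) < 0.2)

# Simulate the assembly with a random, known drive at every step.
U = rng.normal(0.0, 0.5, (T, n))
X = np.zeros((T, n))
for t in range(T - 1):
    X[t + 1] = np.tanh(A_true @ X[t] + U[t])

# Invert the nonlinearity: arctanh(x_{t+1}) - u_t = A_true @ x_t,
# so A_true^T is the solution M of the linear system X[:-1] @ M = Y.
Y = np.arctanh(X[1:]) - U[:-1]
A_hat = np.linalg.lstsq(X[:-1], Y, rcond=None)[0].T

max_err = np.max(np.abs(A_hat - A_true))
print(f"max entrywise recovery error: {max_err:.2e}")
```

In this noiseless linear-algebra caricature the connectivity is recovered essentially exactly; the contribution of the actual framework is to make an analogous decomposition work when the signaling functions, neuron types, and external inputs are themselves unknown and must be learned jointly by a predictive graph neural network.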