Graph neural networks uncover structure and functions underlying the activity of simulated neural assemblies

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the fundamental challenge of inferring the structural and functional mechanisms (connectivity matrices, neuronal cell types, signaling dynamics, and latent external stimuli) that underlie the complex spatiotemporal activity of heterogeneous neural systems. The authors propose an interpretable modeling framework based on graph neural networks that couples neural-dynamics simulation with graph structure learning, overcoming the limited interpretability of conventional black-box models while maintaining high predictive accuracy. Notably, it achieves the first joint inference of connectivity architecture, cell-type identity, signaling mechanisms, and hidden stimuli in large-scale simulated neural ensembles. The approach reconstructs ground-truth connectivity matrices, neuron types, and signal-transmission functions at the scale of thousands of neurons and, in certain scenarios, accurately identifies unknown external inputs.

📝 Abstract
Graph neural networks trained to predict observable dynamics can be used to decompose the temporal activity of complex heterogeneous systems into simple, interpretable representations. Here we apply this framework to simulated neural assemblies with thousands of neurons and demonstrate that it can jointly reveal the connectivity matrix, the neuron types, the signaling functions, and in some cases hidden external stimuli. In contrast to existing machine learning approaches such as recurrent neural networks and transformers, which emphasize predictive accuracy but offer limited interpretability, our method provides both reliable forecasts of neural activity and interpretable decomposition of the mechanisms governing large neural assemblies.
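The paper's core idea is that a model trained only to predict the next step of observed activity can expose the interaction structure that generated it. As a hedged illustration of that principle (not the authors' GNN, which learns message-passing functions jointly with the graph), the toy sketch below simulates a small network with a known connectivity matrix, then recovers that matrix purely from the activity traces. All names (`W_true`, `W_hat`, the `tanh` transfer function, the noise level) are illustrative assumptions, and the known, invertible nonlinearity reduces the inference to least squares; the paper tackles the much harder setting where cell types, signaling functions, and stimuli are also unknown.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 20, 2000  # neurons, time steps

# Ground-truth sparse connectivity matrix (the quantity we try to recover).
W_true = rng.normal(0.0, 0.5, (n, n)) * (rng.random((n, n)) < 0.2)

# Simulate activity: x_{t+1} = tanh(W x_t + noise). The noise keeps the
# trajectory exploring state space instead of collapsing to a fixed point.
X = np.zeros((T, n))
X[0] = rng.normal(0.0, 1.0, n)
for t in range(T - 1):
    X[t + 1] = np.tanh(W_true @ X[t] + 0.3 * rng.normal(size=n))

# Invert the (assumed known) nonlinearity, then fit the connectivity by
# least squares: arctanh(x_{t+1}) ≈ W x_t, i.e. X[:-1] @ W.T ≈ Y.
Y = np.arctanh(np.clip(X[1:], -1 + 1e-9, 1 - 1e-9))
Z, *_ = np.linalg.lstsq(X[:-1], Y, rcond=None)
W_hat = Z.T

print("max absolute recovery error:", np.abs(W_hat - W_true).max())
```

With enough time steps the estimate converges entry-wise to the true weights; the GNN framework in the paper generalizes this recipe by learning the transfer functions themselves and attaching per-neuron embeddings that cluster into cell types.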
Problem

Research questions and friction points this paper is trying to address.

graph neural networks · neural assemblies · interpretable representations · connectivity matrix · neural dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

graph neural networks · interpretable representation · neural assemblies · connectivity inference · dynamical systems
Cédric Allier
Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
Larissa Heinrich
HHMI Janelia Research Campus
Magdalena Schneider
Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
Stephan Saalfeld
HHMI Janelia Research Campus
Biomedical Image Analysis · Computer Vision · Machine Learning