A Disentangled Low-Rank RNN Framework for Uncovering Neural Connectivity and Dynamics

πŸ“… 2025-11-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Low-rank recurrent neural networks (lrRNNs) effectively capture low-dimensional dynamics of neural population activity but suffer from entangled functional roles of latent dimensions and lack interpretable, functionally decoupled connectivity. To address this, we propose DisRNNβ€”a novel framework that enforces inter-group independence and intra-group flexible coupling within a low-rank recurrent architecture, augmented with partial correlation regularization to achieve functional disentanglement of latent variables. Implemented within a variational autoencoder (VAE) framework, DisRNN is jointly trained and validated on both synthetic data and real neural recordings (macaque M1 spiking data and mouse voltage imaging). Experiments demonstrate that DisRNN significantly improves both the functional decoupling of latent trajectories and the neuroscientific interpretability of low-rank connectivity, outperforming existing lrRNN methods. This work establishes a new paradigm for dissecting functional specialization in neural computation.
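The low-rank recurrence that the summary describes can be sketched in a few lines. The following NumPy illustration is not the authors' implementation: it only shows the core idea that an N×N connectivity matrix factored as W = M Nᵀ with small rank R forces the network state to be driven through an R-dimensional latent trajectory. All sizes, the leak rate, and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, rank, n_steps = 100, 3, 50   # sizes are illustrative

# Rank-R connectivity W = M @ Nf.T: 2*N*R parameters instead of N*N
M = rng.normal(size=(n_neurons, rank)) / np.sqrt(n_neurons)
Nf = rng.normal(size=(n_neurons, rank)) / np.sqrt(n_neurons)

x = rng.normal(size=n_neurons)          # network state
latents = []
for _ in range(n_steps):
    kappa = Nf.T @ np.tanh(x)           # R-dim latent: activity projected onto the n-vectors
    x = 0.9 * x + M @ kappa             # leaky update, driven only through the latents
    latents.append(kappa)

latents = np.asarray(latents)           # (n_steps, rank): the low-dimensional trajectory
print(latents.shape)
```

Because the recurrent drive `M @ kappa` depends on the state only through `kappa`, the R latent variables fully summarize the recurrent dynamics; DisRNN's contribution is to make groups of these latents functionally independent of one another.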

πŸ“ Abstract
Low-rank recurrent neural networks (lrRNNs) are a class of models that uncover low-dimensional latent dynamics underlying neural population activity. Although their functional connectivity is low-rank, it lacks a disentangled interpretation, making it difficult to assign distinct computational roles to different latent dimensions. To address this, we propose the Disentangled Recurrent Neural Network (DisRNN), a generative lrRNN framework that assumes group-wise independence among latent dynamics while allowing flexible within-group entanglement. These independent latent groups let the latent dynamics evolve separately while remaining internally rich enough for complex computation. We reformulate the lrRNN under a variational autoencoder (VAE) framework, enabling us to introduce a partial correlation penalty that encourages disentanglement between groups of latent dimensions. Experiments on synthetic, monkey M1, and mouse voltage imaging data show that DisRNN consistently improves the disentanglement and interpretability of the learned neural latent trajectories in low-dimensional space and of the low-rank connectivity, compared with baseline lrRNNs that do not encourage partial disentanglement.
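The partial correlation penalty mentioned in the abstract can be sketched as follows. Partial correlations between latent dimensions can be read off the inverse covariance (precision) matrix of latent samples; penalizing the entries that link dimensions in *different* groups encourages inter-group independence while leaving within-group coupling free. This is a minimal NumPy sketch under those assumptions, not the paper's exact loss; the function name, the ridge term, and the grouping are illustrative.

```python
import numpy as np

def partial_corr_penalty(z, groups):
    """Sum of |partial correlations| between latent dims in different groups.

    z: (n_samples, n_dims) array of latent samples.
    groups: list of index lists, one per latent group.
    """
    zc = z - z.mean(axis=0)
    # Small ridge keeps the sample covariance invertible
    cov = zc.T @ zc / (len(z) - 1) + 1e-3 * np.eye(z.shape[1])
    prec = np.linalg.inv(cov)
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)       # partial correlation matrix
    np.fill_diagonal(pcorr, 0.0)
    total = 0.0
    for i, gi in enumerate(groups):      # only cross-group blocks are penalized
        for gj in groups[i + 1:]:
            total += np.abs(pcorr[np.ix_(gi, gj)]).sum()
    return total

rng = np.random.default_rng(0)
ind = rng.normal(size=(2000, 4))                         # groups independent
dep = ind.copy()
dep[:, 2:] = dep[:, :2] + 0.1 * rng.normal(size=(2000, 2))  # group 2 copies group 1
groups = [[0, 1], [2, 3]]
print(partial_corr_penalty(ind, groups) < partial_corr_penalty(dep, groups))
```

In a VAE training loop such a term would be added, with some weight, to the reconstruction and KL losses, with the group partition chosen as a hyperparameter.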
Problem

Research questions and friction points this paper is trying to address.

Low-rank RNNs lack a disentangled interpretation of neural connectivity
Existing models cannot assign distinct computational roles to individual latent dimensions
Current approaches struggle to recover interpretable neural dynamics in low-dimensional latent spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled RNN framework with group-wise independence
Variational autoencoder with partial correlation penalty
Separate latent group evolution for complex computation
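The "separate latent group evolution" idea above amounts to a block-diagonal latent transition: each group evolves under its own recurrence, with no coupling between groups but free coupling within a group. A minimal NumPy sketch (group sizes, the 0.9 contraction factor, and the use of random orthogonal blocks are all illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
group_dims = [2, 2, 3]            # three latent groups; sizes are illustrative
n_steps = 40

# One contracting rotation per group; the full transition A is block-diagonal,
# so groups evolve independently while dims inside a group stay coupled.
blocks = [0.9 * np.linalg.qr(rng.normal(size=(d, d)))[0] for d in group_dims]
D = sum(group_dims)
A = np.zeros((D, D))
i = 0
for B in blocks:
    d = B.shape[0]
    A[i:i + d, i:i + d] = B
    i += d

z = rng.normal(size=D)
traj = [z]
for _ in range(n_steps):
    z = A @ z
    traj.append(z)

# Cross-group blocks of A are exactly zero: group 1 is untouched by groups 2-3
print(np.allclose(A[:2, 2:], 0.0))
```

DisRNN does not hard-code the zeros this way; per the abstract, it encourages this structure softly through the partial correlation penalty inside a VAE, which lets the data determine how cleanly the groups separate.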
πŸ”Ž Similar Papers
No similar papers found.