🤖 AI Summary
Low-rank recurrent neural networks (lrRNNs) effectively capture the low-dimensional dynamics of neural population activity, but the functional roles of their latent dimensions remain entangled, and their connectivity lacks an interpretable, functionally decoupled structure. To address this, we propose DisRNN, a novel framework that enforces independence between groups of latent dimensions while allowing flexible coupling within each group, augmented with a partial correlation regularizer that encourages functional disentanglement of the latent variables. Implemented within a variational autoencoder (VAE) framework, DisRNN is trained and validated on both synthetic data and real neural recordings (macaque M1 spiking data and mouse voltage imaging). Experiments demonstrate that DisRNN substantially improves both the functional decoupling of latent trajectories and the neuroscientific interpretability of the low-rank connectivity, outperforming existing lrRNN methods. This work establishes a new paradigm for dissecting functional specialization in neural computation.
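To make the low-rank setting concrete, the sketch below simulates the standard rank-R parameterization of recurrent connectivity, W = (1/N)·M·Lᵀ, and checks that the population state is driven toward the R-dimensional subspace spanned by the column vectors. All sizes and variable names here are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch of a rank-R recurrent network, assuming the common
# low-rank parameterization W = (1/N) * M @ L.T with M, L of shape (N, R).
rng = np.random.default_rng(0)
N, R, T = 200, 4, 50                      # neurons, rank, time steps (illustrative)
M = rng.standard_normal((N, R))           # left connectivity vectors
L = rng.standard_normal((N, R))           # right connectivity vectors
W = (M @ L.T) / N                         # rank-R connectivity matrix

x = 0.1 * rng.standard_normal(N)          # initial population state
for _ in range(T):
    x = x + 0.1 * (-x + W @ np.tanh(x))   # leaky low-rank RNN dynamics

# Recurrent input W @ tanh(x) always lies in span(M), so after the leak
# damps the initial condition, x concentrates near that R-dim subspace.
coeffs = np.linalg.lstsq(M, x, rcond=None)[0]
projection = M @ coeffs                   # best approximation within span(M)
```

The key point the abstract builds on: because every recurrent update lives in an R-dimensional subspace, the population dynamics are effectively described by R latent variables, which DisRNN then organizes into independent groups.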
📄 Abstract
Low-rank recurrent neural networks (lrRNNs) are a class of models that uncover low-dimensional latent dynamics underlying neural population activity. Although their functional connectivity is low-rank, it is not disentangled: distinct computational roles are difficult to assign to different latent dimensions. To address this, we propose the Disentangled Recurrent Neural Network (DisRNN), a generative lrRNN framework that assumes group-wise independence among latent dynamics while allowing flexible entanglement within each group. The independent groups let latent dynamics evolve separately, yet each group remains internally rich enough to support complex computation. We reformulate the lrRNN under a variational autoencoder (VAE) framework, which lets us introduce a partial correlation penalty that encourages disentanglement between groups of latent dimensions. Experiments on synthetic data, monkey M1 recordings, and mouse voltage imaging show that DisRNN consistently improves the disentanglement and interpretability of the learned latent trajectories and low-rank connectivity over baseline lrRNNs that do not encourage this partial disentanglement.
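A partial-correlation penalty of the kind described above can be sketched as follows: compute the precision matrix of the latent samples, convert it to partial correlations, and penalize only the entries that couple different groups, leaving within-group structure free. This is a minimal NumPy illustration under assumed group sizes; the function name and exact penalty form are illustrative, not taken from the paper.

```python
import numpy as np

def partial_corr_penalty(z, group_sizes, eps=1e-3):
    """Sum of squared partial correlations between latent groups.

    z           -- latent samples, shape (num_samples, D)
    group_sizes -- sizes of contiguous latent groups summing to D
    eps         -- ridge term for numerical stability (assumed choice)
    """
    D = z.shape[1]
    cov = np.cov(z, rowvar=False) + eps * np.eye(D)  # regularized covariance
    prec = np.linalg.inv(cov)                        # precision matrix
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)                   # partial correlations
    np.fill_diagonal(pcorr, 1.0)
    # Mask selects only cross-group entries; within-group entries are free.
    mask = np.ones((D, D), dtype=bool)
    bounds = np.cumsum([0] + list(group_sizes))
    for a, b in zip(bounds[:-1], bounds[1:]):
        mask[a:b, a:b] = False
    return np.sum(pcorr[mask] ** 2)

rng = np.random.default_rng(1)
z_indep = rng.standard_normal((2000, 4))   # two independent groups of size 2
z_mixed = z_indep.copy()
z_mixed[:, 2] += z_mixed[:, 0]             # couple group 2 to group 1
low = partial_corr_penalty(z_indep, (2, 2))
high = partial_corr_penalty(z_mixed, (2, 2))
```

As expected, the penalty is near zero for independent groups and large once a dimension of one group is driven by another group, which is the behavior the regularizer exploits during training.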