CITS: Nonparametric Statistical Causal Modeling for High-Resolution Neural Time Series

📅 2025-08-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of inferring statistically causal neural circuits from high-resolution neural time-series data, overcoming conventional methods whose linearity, Gaussianity, and i.i.d. assumptions prevent them from accurately characterizing directional signal propagation and its underlying mechanisms. The authors propose CITS (Causal Inference in Time Series), a nonparametric method for causal time-series structure learning that supports arbitrary Markov orders, integrates Gaussian or distribution-free lagged conditional independence tests, and is proven consistent under weak mixing conditions. CITS unifies structural causal modeling, nonparametric conditional independence testing, and Neuropixels-specific preprocessing techniques. Evaluated on both synthetic and empirical datasets, CITS significantly outperforms state-of-the-art approaches: it recovers the visually driven cortical–thalamo–hippocampal causal pathway and reveals that functionally similar neurons have a higher propensity for causal connectivity.

📝 Abstract
Understanding how signals propagate through neural circuits is central to deciphering brain computation. While functional connectivity captures statistical associations, it does not reveal directionality or causal mechanisms. We introduce CITS (Causal Inference in Time Series), a non-parametric method for inferring statistically causal neural circuitry from high-resolution time series data. CITS models neural dynamics using a structural causal model with arbitrary Markov order and tests for time-lagged conditional independence using either Gaussian or distribution-free statistics. Unlike classical Granger Causality, which assumes linear autoregressive models and Gaussian noise, or the Peter-Clark algorithm, which assumes i.i.d. data and no temporal structure, CITS handles temporally dependent, potentially non-Gaussian data with flexible testing procedures. We prove consistency under mild mixing assumptions and validate CITS on simulated linear, nonlinear, and continuous-time recurrent neural network data, where it outperforms state-of-the-art methods. We then apply CITS to Neuropixels recordings from mouse brain during visual tasks. CITS uncovers interpretable, stimulus-specific causal circuits linking cortical, thalamic, and hippocampal regions, consistent with experimental literature. It also reveals that neurons with similar orientation selectivity indices are more likely to be causally connected. Our results demonstrate the utility of CITS in uncovering biologically meaningful pathways and generating hypotheses for future experimental studies.
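The abstract's core mechanism, testing whether series i at time t−lag is conditionally independent of series j at time t given the past of all series up to the Markov order, can be sketched as follows. This is an illustrative toy using a Fisher-z partial-correlation test (the Gaussian option the paper mentions), not the authors' implementation; all function names here are made up for the example.

```python
import numpy as np
from math import sqrt, log, erf

def partial_corr_pvalue(a, b, Z):
    """Fisher-z p-value for the partial correlation of a and b given Z,
    computed via regression residuals. A sketch of a Gaussian lagged CI
    test, not the paper's actual code."""
    if Z.size:
        D = np.column_stack([np.ones(len(a)), Z])
        a = a - D @ np.linalg.lstsq(D, a, rcond=None)[0]
        b = b - D @ np.linalg.lstsq(D, b, rcond=None)[0]
    r = float(np.clip(np.corrcoef(a, b)[0, 1], -0.999999, 0.999999))
    z = 0.5 * log((1 + r) / (1 - r)) * sqrt(max(len(a) - Z.shape[1] - 3, 1))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def lagged_edge_test(data, i, j, lag, markov_order, alpha=0.05):
    """Keep the candidate edge i -> j at the given lag iff series i at
    t-lag remains dependent on series j at t after conditioning on all
    other lagged values up to the Markov order."""
    T, n = data.shape
    p = markov_order
    y_now = data[p:, j]                   # Y_t
    x_past = data[p - lag:T - lag, i]     # X_{t-lag}
    cols = [data[p - l:T - l, k]          # conditioning set: all other lags
            for l in range(1, p + 1) for k in range(n)
            if not (l == lag and k == i)]
    Z = np.column_stack(cols) if cols else np.empty((T - p, 0))
    return partial_corr_pvalue(x_past, y_now, Z) < alpha

# Toy example: x drives y at lag 1, so the test should keep x -> y
# and (usually) discard the reverse edge y -> x.
rng = np.random.default_rng(0)
T = 2000
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
data = np.column_stack([x, y])
edge_xy = lagged_edge_test(data, 0, 1, lag=1, markov_order=1)  # True
edge_yx = lagged_edge_test(data, 1, 0, lag=1, markov_order=1)
```

A PC-style search over such tests, iterating across lags and conditioning sets, is the general shape of constraint-based temporal structure learning; the paper's contribution includes proving consistency of this kind of procedure under mixing conditions, which the sketch does not address.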
Problem

Research questions and friction points this paper is trying to address.

Infer causal neural circuitry from high-resolution time series
Handle non-Gaussian, temporally dependent neural data flexibly
Uncover stimulus-specific causal circuits in brain regions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nonparametric causal modeling for neural time series
Handles temporally dependent non-Gaussian data
Flexible testing with Gaussian or distribution-free statistics
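To make the last bullet concrete: a distribution-free alternative to the Gaussian test can be built from ranks plus permutations. The sketch below is a hypothetical illustration of that idea (a permutation Spearman-type test); the paper's actual distribution-free statistic may differ.

```python
import numpy as np

def ranks(v):
    """Replace values by their ranks (ties ignored for continuous data)."""
    return np.argsort(np.argsort(v)).astype(float)

def permutation_corr_pvalue(a, b, n_perm=500, seed=0):
    """Distribution-free independence test: compare the observed
    |corr(a, b)| against a null built by permuting a. Applied to ranks,
    this is a permutation Spearman test. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    obs = abs(float(np.corrcoef(a, b)[0, 1]))
    null = np.array([abs(float(np.corrcoef(rng.permutation(a), b)[0, 1]))
                     for _ in range(n_perm)])
    return (1 + int((null >= obs).sum())) / (1 + n_perm)

# Monotone but nonlinear dependence: a rank-based permutation test
# detects it without any Gaussianity or linearity assumption.
rng = np.random.default_rng(1)
a = rng.standard_normal(400)
b = np.exp(a) + 0.1 * rng.standard_normal(400)
p_dep = permutation_corr_pvalue(ranks(a), ranks(b))  # small p: dependent
```

Swapping such a statistic into the lagged conditional independence step is what gives a constraint-based learner its robustness to non-Gaussian noise.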