🤖 AI Summary
Modeling and interpreting low-dimensional manifold structures in high-dimensional neural time series remains challenging due to their intrinsic nonlinearity and temporal complexity.
Method: We propose a hierarchical stochastic differential equation (SDE) framework that couples Brownian bridge SDEs with a multivariate marked point process to yield continuous, differentiable latent-space dynamics. Sparse sampling of manifold points, combined with an SDE-driven observation mapping, enables efficient reconstruction of manifold trajectories.
Contribution/Results: The method achieves both interpretability and computational efficiency: its inference cost scales linearly with sequence length, markedly outperforming conventional nonlinear dimensionality reduction and black-box deep SDE approaches. Extensive validation on synthetic benchmarks and real neural recordings, including electrophysiological data from macaque motor cortex, demonstrates accurate recovery of the underlying manifold geometry and dynamics, along with strong scalability.
📝 Abstract
The manifold hypothesis suggests that high-dimensional neural time series lie on a low-dimensional manifold shaped by simpler underlying dynamics. To uncover this structure, latent dynamical variable models such as state-space models, recurrent neural networks, neural ordinary differential equations, and Gaussian process latent variable models are widely used. We propose a novel hierarchical stochastic differential equation (SDE) model that balances computational efficiency and interpretability, addressing key limitations of existing methods. Our model assumes that the manifold trajectory can be reconstructed from a sparse set of samples along it. The latent space is modeled using Brownian bridge SDEs, with points, specified in both time and value, sampled from a multivariate marked point process. These Brownian bridges define the drift of a second set of SDEs, which are then mapped to the observed data. This yields a continuous, differentiable latent process capable of modeling arbitrarily complex time series as the number of manifold points increases. We derive training and inference procedures and show that the computational cost of inference scales linearly with the length of the observation data. We then validate our model on both synthetic data and neural recordings to demonstrate that it accurately recovers the underlying manifold structure and scales effectively with data dimensionality.
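To make the hierarchy concrete, the following is a minimal forward-simulation sketch of the generative structure the abstract describes: a Brownian bridge SDE pinned to sparse manifold points, a second SDE whose drift tracks the bridge, and a linear observation map. All specifics here (fixed anchor points instead of a sampled marked point process, mean-reverting drift for the second layer, Euler-Maruyama discretization, a linear readout) are illustrative assumptions, not the paper's actual parameterization or inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical manifold "anchor" points: times and values. In the paper these
# are drawn from a multivariate marked point process; here they are fixed.
anchor_t = np.array([0.0, 1.0, 2.0, 3.0])
anchor_x = np.array([0.0, 1.5, -0.5, 0.8])

dt = 1e-3
t = np.arange(anchor_t[0], anchor_t[-1], dt)

# Layer 1: Brownian bridge SDE between consecutive anchors (Euler-Maruyama):
#   dB_t = (x_next - B_t) / (t_next - t) dt + sigma dW_t
sigma_bridge = 0.2
B = np.empty_like(t)
B[0] = anchor_x[0]
for i in range(1, len(t)):
    seg = min(np.searchsorted(anchor_t, t[i - 1], side="right") - 1,
              len(anchor_t) - 2)
    # Guard the denominator against floating-point round-off near anchors.
    denom = max(anchor_t[seg + 1] - t[i - 1], dt)
    drift = (anchor_x[seg + 1] - B[i - 1]) / denom
    B[i] = B[i - 1] + drift * dt + sigma_bridge * np.sqrt(dt) * rng.standard_normal()

# Layer 2: a second SDE whose drift is defined by the bridge (here a simple
# mean-reverting pull toward B_t, an illustrative choice):
#   dZ_t = theta (B_t - Z_t) dt + sigma_z dW'_t
theta, sigma_z = 5.0, 0.1
Z = np.empty_like(t)
Z[0] = B[0]
for i in range(1, len(t)):
    Z[i] = (Z[i - 1] + theta * (B[i - 1] - Z[i - 1]) * dt
            + sigma_z * np.sqrt(dt) * rng.standard_normal())

# Observation map: an assumed linear readout from the 1-D latent to a
# 10-dimensional "observed" time series.
C = rng.standard_normal((10, 1))
Y = Z[:, None] @ C.T

print(Y.shape)  # (len(t), 10)
```

Because the bridge drift grows as each anchor time approaches, the latent path is pulled through the sparse manifold points; adding more anchors lets the same construction trace arbitrarily complex trajectories, which is the intuition behind the model's expressiveness claim.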