🤖 AI Summary
Computing expected outputs of stochastic reaction networks (SRNs) is a fundamental yet computationally intractable problem in chemical kinetics, epidemiology, and related fields: closed-form solutions rarely exist, conventional numerical methods (e.g., finite state projection, the stochastic simulation algorithm) suffer from high computational cost, and existing deep learning approaches lack interpretability and theoretical guarantees. This paper introduces the first neural modeling framework for SRNs that provides both interpretability and rigorous theoretical guarantees. Built on a mathematically transparent architecture, it integrates sparse stochastic simulations, finite state projection, and Monte Carlo estimation to yield unbiased, provably convergent, low-variance predictions. The framework generalizes across initial states, time horizons, and output functions. Evaluated on nine SRNs, including nonlinear and non-mass-action models, it achieves speedups of several orders of magnitude while accurately modeling complex systems with up to ten species.
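For context on why direct simulation is expensive: the stochastic simulation algorithm (SSA) mentioned above is Gillespie's direct method, which samples exact trajectories one reaction event at a time. A minimal sketch for a simple birth-death network (the rates, initial state, and horizon here are chosen purely for illustration, not taken from the paper) shows that estimating an expectation this way needs many full trajectories, with error shrinking only as 1/√n:

```python
import random

def ssa_birth_death(x0, k_birth, k_death, t_end, rng):
    """Gillespie direct method for the network: 0 -> X (rate k_birth),
    X -> 0 (propensity k_death * x). Returns the state at time t_end."""
    t, x = 0.0, x0
    while True:
        a1 = k_birth          # birth propensity
        a2 = k_death * x      # death propensity
        a0 = a1 + a2
        if a0 == 0.0:
            return x          # no reaction can fire; state is absorbed
        t += rng.expovariate(a0)  # waiting time to the next reaction
        if t > t_end:
            return x
        if rng.random() * a0 < a1:
            x += 1            # birth fired
        else:
            x -= 1            # death fired

rng = random.Random(0)
# Plain Monte Carlo estimate of E[X(t_end)]: one full trajectory per sample.
n = 2000
est = sum(ssa_birth_death(0, 10.0, 1.0, 5.0, rng) for _ in range(n)) / n
```

For this linear network E[X(t)] = (k_birth/k_death)(1 − e^(−k_death·t)) ≈ 9.93 at t = 5, so the estimate lands near 10; repeating this across many initial states and time points is what makes naive SSA-based estimation prohibitive.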
📝 Abstract
Stochastic Reaction Networks (SRNs) are a fundamental modeling framework for systems ranging from chemical kinetics and epidemiology to ecological and synthetic biological processes. A central computational challenge is the estimation of expected outputs across initial conditions and times, a task that is rarely solvable analytically and becomes computationally prohibitive with current methods such as Finite State Projection or the Stochastic Simulation Algorithm. Existing deep learning approaches offer empirical scalability, but provide neither interpretability nor reliability guarantees, limiting their use in scientific analysis and in applications where model outputs inform real-world decisions. Here we introduce DeepSKA, a neural framework that jointly achieves interpretability, guaranteed reliability, and substantial computational gains. DeepSKA yields mathematically transparent representations that generalize across states, times, and output functions, and it integrates this structure with a small number of stochastic simulations to produce unbiased, provably convergent estimates with dramatically lower variance than classical Monte Carlo. We demonstrate these capabilities across nine SRNs, including nonlinear and non-mass-action models with up to ten species, where DeepSKA delivers accurate predictions and orders-of-magnitude efficiency improvements. This interpretable and reliable neural framework offers a principled foundation for developing analogous methods for other Markovian systems, including stochastic differential equations.