🤖 AI Summary
Existing methods struggle to quantify directed information flow (e.g., transfer entropy) without bias in stochastic systems featuring latent variables, nonlinearities, transients, and feedback, often relying on uncontrolled approximations or naive applications of the data processing inequality. To address this, we propose TE-PWS, the first exact, approximation-free, controllable, and universally applicable transfer entropy algorithm, grounded in path-integral modeling, polymer-physics-inspired path weight sampling, and Monte Carlo averaging over trajectory space. TE-PWS overcomes fundamental theoretical limitations of classical information-theoretic tools in feedback systems, enabling the first unbiased computation of transfer entropy and its variants for arbitrary stochastic dynamical models. We validate TE-PWS across diverse linear and nonlinear systems, demonstrating its accuracy and revealing counterintuitive, feedback-induced reversals in the direction of information flow.
📝 Abstract
The ability to quantify the directional flow of information is vital to understanding natural systems and designing engineered information-processing systems. A widely used measure of this information flow is the transfer entropy. However, until now, this quantity could only be obtained in dynamical models using approximations that are typically uncontrolled. Here we introduce a computational algorithm called Transfer Entropy-Path Weight Sampling (TE-PWS), which makes it possible, for the first time, to quantify the transfer entropy and its variants exactly for any stochastic model, including those with multiple hidden variables, nonlinearity, transient conditions, and feedback. By leveraging techniques from polymer physics and path sampling, TE-PWS efficiently computes the transfer entropy as a Monte Carlo average over signal trajectory space. We apply TE-PWS to linear and nonlinear systems to reveal how transfer entropy can overcome naive applications of the data processing inequality in the presence of feedback.
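To make the quantity being computed concrete: the transfer entropy from a signal X to a target Y is the conditional mutual information I(Y_{t+1}; X_t | Y_t), i.e., how much the past of X reduces uncertainty about the next step of Y beyond what Y's own past provides. The sketch below is *not* the TE-PWS algorithm (which computes the trajectory-level quantity exactly via path weight sampling); it is a minimal plug-in estimate on a hypothetical two-state system where X drives Y with a one-step delay, just to illustrate the directionality that transfer entropy captures. All parameter values (sequence length, noise level) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000  # illustrative trajectory length

# Hypothetical driven system: Y_{t+1} copies X_t with probability 0.9.
x = rng.integers(0, 2, T)
y = np.empty(T, dtype=int)
y[0] = 0
flip = rng.random(T - 1) < 0.1
y[1:] = np.where(flip, 1 - x[:-1], x[:-1])

def transfer_entropy(src, tgt):
    """Plug-in estimate of TE_{src->tgt} = I(tgt_{t+1}; src_t | tgt_t), in nats."""
    # Joint counts over (tgt_next, src_now, tgt_now) for binary variables.
    counts = np.zeros((2, 2, 2))
    np.add.at(counts, (tgt[1:], src[:-1], tgt[:-1]), 1)
    p = counts / counts.sum()
    p_bc = p.sum(axis=0, keepdims=True)      # p(src_now, tgt_now)
    p_ac = p.sum(axis=1, keepdims=True)      # p(tgt_next, tgt_now)
    p_c = p.sum(axis=(0, 1), keepdims=True)  # p(tgt_now)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p * p_c / (p_ac * p_bc)), 0.0)
    return terms.sum()

te_xy = transfer_entropy(x, y)  # substantial: X drives Y
te_yx = transfer_entropy(y, x)  # near zero: X evolves independently
```

For this toy model the forward transfer entropy is large (analytically ln 2 minus the binary entropy of the 10% flip noise, about 0.37 nats) while the reverse direction is essentially zero, reflecting the one-way coupling. Such naive histogram estimates break down with hidden variables and continuous state spaces, which is the regime TE-PWS is designed to handle exactly.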