🤖 AI Summary
This work addresses the lack of a purely logical characterization and formal semantic foundation for Bayesian inference. We propose a proof-theoretic modeling framework based on multiplicative linear logic, establishing for the first time a rigorous correspondence between Bayesian networks and multiplicative proof nets: joint probability distributions are semantically encoded as proof structures, and Bayesian inference is reformulated as proof reduction. Leveraging categorical semantics, we uncover a computational isomorphism between probabilistic graphical models and structured proofs. The resulting framework provides a purely logical representation of Bayesian inference, enabling verifiable probabilistic computation. Moreover, it delivers the first formal semantics for probabilistic programming languages grounded in linear logic, thereby bridging the theoretical gap between probabilistic reasoning and logical computation.
📝 Abstract
We uncover a strong correspondence between Bayesian networks and (multiplicative) linear logic proof-nets: the two are related both as representations of a joint probability distribution and at the level of computation, yielding a proof-theoretic account of Bayesian inference.
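As a minimal illustration of the probabilistic side of this correspondence: a Bayesian network encodes a joint distribution as a product of local conditional tables, and inference marginalizes over unobserved variables. The sketch below shows only this standard probabilistic reading, with hypothetical names; the paper's actual contribution, encoding such networks as proof nets and running inference as proof reduction, is not reproduced here.

```python
# Two-node Bayesian network A -> B, given by its local conditional tables.
# (Illustrative only; the paper re-expresses such data as linear-logic proofs.)
P_A = {0: 0.7, 1: 0.3}                     # prior P(A)
P_B_given_A = {0: {0: 0.9, 1: 0.1},        # P(B | A=0)
               1: {0: 0.2, 1: 0.8}}        # P(B | A=1)

def joint(a, b):
    """Joint distribution: P(A=a, B=b) = P(A=a) * P(B=b | A=a)."""
    return P_A[a] * P_B_given_A[a][b]

def marginal_B(b):
    """Inference by marginalization: P(B=b) = sum over a of P(A=a, B=b)."""
    return sum(joint(a, b) for a in P_A)

print(round(marginal_B(1), 3))  # 0.7*0.1 + 0.3*0.8 = 0.31
```

In the paper's framework, the factorization step corresponds to the structure of a proof net, and the summation step to a reduction on that proof.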