🤖 AI Summary
This work addresses the lack of a unified semantic foundation for observation modeling, Bayesian updating, normalization, and both Pearl-style (causal intervention) and Jeffrey-style (soft evidence) conditioning in probabilistic reasoning. Methodologically, it introduces *partial Markov categories*, a novel categorical structure that integrates Markov categories with cartesian restriction categories, yielding purely categorical, axiomatic characterizations of observations, Bayes' theorem, normalization, and both update paradigms. The key contribution is a compositional framework that uniformly supports both causal interventions (à la Pearl) and Jeffrey-style belief updates. Crucially, all central concepts, including conditioning, normalization, and the update rules, emerge from the categorical structure itself, without appeal to an external probabilistic interpretation. The result is a compositional, layered semantic foundation for probabilistic programming languages and causal inference systems.
📝 Abstract
We introduce partial Markov categories as a synthetic framework for probabilistic inference, blending the work of Cho and Jacobs, Fritz, and Golubtsov on Markov categories with the work of Cockett and Lack on cartesian restriction categories. We describe observations, Bayes' theorem, normalisation, and both Pearl's and Jeffrey's updates in purely categorical terms.
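To make the distinction between the two update rules concrete, here is a minimal sketch in plain Python of Pearl-style and Jeffrey-style updates on finite discrete distributions. The formulas are the standard ones for soft evidence; the specific prior, channel, and evidence values are illustrative assumptions, not taken from the paper, and this sketch does not reflect the paper's categorical formulation.

```python
# Pearl's vs Jeffrey's update on finite distributions (standard formulas).
# Prior is a distribution over X; channel[x][y] gives c(y|x); evidence is
# a soft-evidence distribution over Y. All concrete numbers are illustrative.

def pearl_update(prior, channel, evidence):
    """Pearl-style update: condition on virtual (soft) evidence.
    new(x) is proportional to prior(x) * sum_y channel[x][y] * evidence[y]."""
    weights = [p * sum(c * e for c, e in zip(row, evidence))
               for p, row in zip(prior, channel)]
    total = sum(weights)  # normalisation constant
    return [w / total for w in weights]

def jeffrey_update(prior, channel, evidence):
    """Jeffrey-style update: mix the hard-evidence Bayesian posteriors
    P(x|y), weighted by the soft evidence over Y."""
    result = [0.0] * len(prior)
    for y in range(len(evidence)):
        post = [p * channel[x][y] for x, p in enumerate(prior)]
        z = sum(post)  # marginal probability of y under the prior
        for x in range(len(prior)):
            result[x] += evidence[y] * post[x] / z
    return result

prior = [0.5, 0.5]         # prior over X = {0, 1}
channel = [[0.9, 0.1],     # c(y|x), rows indexed by x
           [0.2, 0.8]]
evidence = [0.3, 0.7]      # soft evidence over Y = {0, 1}

print(pearl_update(prior, channel, evidence))    # ≈ [0.3542, 0.6458]
print(jeffrey_update(prior, channel, evidence))  # ≈ [0.3232, 0.6768]
```

Note that the two rules genuinely disagree on the same inputs (they coincide only in special cases, e.g. when the evidence is a point mass), which is why a framework supporting both uniformly is non-trivial.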