🤖 AI Summary
This work addresses the fundamental challenge of preserving causal structure when abstracting from low-level to high-level models, a key issue in scientific modeling, causal inference, and interpretable AI. Leveraging category theory, it formalizes causal abstraction as natural transformations, unifying existing frameworks and distinguishing upward from downward abstraction, with the latter, centered on high-level query mappings, shown to be the more foundational. The paper introduces the novel notion of component-wise abstraction to strengthen mechanistic, constructive causal modeling, and extends this framework for the first time to quantum compositional circuits, opening new avenues for explainable quantum AI. By integrating intervention semantics and query mappings within compositional models in monoidal, cd-, or Markov categories, it establishes a unified theory of causal abstraction, proves a characterization theorem for mechanism-level abstraction, and demonstrates an initial abstraction mapping between classical causal models and quantum circuits.
📝 Abstract
Abstracting from a low level to a more explanatory high level of description, ideally in a way that preserves causal structure, is fundamental to scientific practice, to causal inference problems, and to robust, efficient and interpretable AI. We present a general account of abstractions between low- and high-level models as natural transformations, focusing on the case of causal models. This provides a new formalisation of causal abstraction, unifying several notions in the literature, including constructive causal abstraction, Q-$\tau$ consistency, abstractions based on interchange interventions, and 'distributed' causal abstractions. Our approach is formalised in terms of category theory and uses the general notion of a compositional model with a given set of queries and semantics in a monoidal, cd- or Markov category; causal models and their queries, such as interventions, are special cases. We identify two basic notions of abstraction: downward abstractions, which map queries from high to low level; and upward abstractions, which map concrete queries such as do-interventions from low to high. Although usually presented as the latter, we show how common causal abstractions may, more fundamentally, be understood in terms of the former. Our approach also leads us to consider a new, stronger notion of 'component-level' abstraction, applying to the individual components of a model. In particular, this yields a novel, strengthened form of constructive causal abstraction at the mechanism level, for which we prove characterisation results. Finally, we show that abstraction generalises to further compositional models, including those with a quantum semantics implemented by quantum circuits, and we take first steps in exploring abstractions between quantum compositional circuit models and high-level classical causal models as a means to explainable quantum AI.
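The core consistency condition behind causal abstraction, that mapping a low-level interventional outcome up to the high level agrees with intervening directly at the high level, can be illustrated with a toy example. The following sketch is not from the paper: the models, the map `tau`, and the intervention mapping `omega` are hypothetical stand-ins for a deterministic constructive abstraction that clusters three low-level inputs into one high-level variable.

```python
from itertools import product

# Toy low-level model: three binary inputs X1, X2, X3 with mechanism
# Y = X1 + X2 + X3 (an integer in 0..3).
def low_outcome(x1, x2, x3):
    y = x1 + x2 + x3
    return (x1, x2, x3, y)

# Toy high-level model: one variable X (the clustered sum) with mechanism Y = X.
def high_outcome(x):
    return (x, x)

# tau: surjective map from low-level states to high-level states,
# clustering (X1, X2, X3) into X by summation and keeping Y as-is.
def tau(state):
    x1, x2, x3, y = state
    return (x1 + x2 + x3, y)

# omega maps the low-level intervention do(X1=a, X2=b, X3=c) to the
# high-level intervention do(X = a + b + c).
def omega(a, b, c):
    return a + b + c

# Consistency check: for every low-level do-intervention, abstracting the
# low-level outcome with tau agrees with running the high-level model
# under the corresponding high-level intervention.
ok = all(
    tau(low_outcome(a, b, c)) == high_outcome(omega(a, b, c))
    for a, b, c in product([0, 1], repeat=3)
)
print(ok)  # True: tau commutes with interventions for this toy pair of models
```

Here `ok` being `True` says the square formed by "intervene then abstract" and "abstract the intervention then intervene" commutes, which is the shape of condition the paper expresses abstractly via natural transformations.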