🤖 AI Summary
Existing neural causal models predominantly assume static causal graphs, limiting their ability to capture dynamic causal interactions in sequential visual observations.
Method: This paper proposes a dynamic causal process inference framework for visual observations, reformulating the Transformer attention mechanism as a reinforcement learning (RL) problem grounded in causal process theory. It embeds causal graph structure learning within a multi-agent RL paradigm, in which Transformer-based attention modules and RL agents jointly discover and model time-varying causal connections among units in an interpretable manner.
Contribution/Results: The method achieves state-of-the-art performance in both causal representation learning and agent decision-making. Crucially, it accurately recovers time-varying causal graph structures, achieving for the first time end-to-end, interpretable learning of dynamic causal processes from raw visual inputs.
📝 Abstract
Formal frameworks of causality have operated largely in parallel with modern trends in deep reinforcement learning (RL). Recently, however, there has been a revival of interest in formally grounding the representations learned by neural networks in causal concepts. Yet most attempts at neural models of causality assume static causal graphs and ignore the dynamic nature of causal interactions. In this work, we introduce the Causal Process framework, a novel theory for representing dynamic hypotheses about causal structure, and present the Causal Process Model as an implementation of this framework. This allows us to reformulate the attention mechanism popularized by Transformer networks within an RL setting, with the goal of inferring interpretable causal processes from visual observations. Here, causal inference corresponds to constructing a causal graph hypothesis, which itself becomes an RL task nested within the original RL problem. To create an instance of such a hypothesis, we employ RL agents that establish links between units, analogous to the original Transformer attention mechanism. We demonstrate the effectiveness of our approach in an RL environment, where we outperform current alternatives in causal representation learning and agent performance, and uniquely recover graphs of dynamic causal processes.
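The core mechanism the abstract describes, agents establishing links between units via attention so that the sampled links form a causal graph hypothesis, can be sketched as scaled dot-product attention logits reused as a stochastic policy over parent edges. This is a minimal illustration under simplifying assumptions (one sampled parent per unit, random unit embeddings, and all function names are ours), not the paper's implementation:

```python
import numpy as np

def attention_logits(queries, keys):
    # Scaled dot-product attention logits, as in Transformer attention.
    d = queries.shape[-1]
    return queries @ keys.T / np.sqrt(d)

def sample_graph_hypothesis(logits, rng):
    # Treat each row of logits as a per-unit policy over candidate parents:
    # agent i samples one incoming edge, yielding an adjacency matrix that
    # serves as one causal graph hypothesis (an action in the nested RL task).
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    parents = np.array([rng.choice(len(p), p=p) for p in probs])
    adj = np.zeros_like(logits)
    adj[np.arange(len(parents)), parents] = 1.0
    return adj, probs

rng = np.random.default_rng(0)
n_units, d = 4, 8
units = rng.normal(size=(n_units, d))   # stand-in for learned unit embeddings
logits = attention_logits(units, units)
adj, probs = sample_graph_hypothesis(logits, rng)
```

In a full system, the sampled adjacency would be scored by downstream task reward and the logits updated by a policy-gradient method; resampling at each time step is what makes the recovered graph time-varying.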