Continual learning and refinement of causal models through dynamic predicate invention

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently modeling agent behavior in complex environments, where standard world models suffer from low sample efficiency, poor interpretability, and limited scalability. To overcome these limitations, the authors propose a novel method for online construction of symbolic, hierarchical causal world models. By integrating continuous causal learning and a repair mechanism into the decision loop—combined with meta-explanatory learning and dynamic predicate invention—the approach enables, for the first time, the online generation of semantically explicit and reusable abstract predicates. This facilitates the formation of a disentangled high-level conceptual hierarchy and supports continual model refinement. Empirical results demonstrate that the method substantially outperforms neural baselines such as PPO in complex relational dynamic environments, achieving orders-of-magnitude gains in sample efficiency while effectively avoiding the combinatorial explosion inherent in propositional approaches.

📝 Abstract
Efficiently navigating complex environments requires agents to internalize the underlying logic of their world, yet standard world-modelling methods often struggle with sample inefficiency, lack of transparency, and poor scalability. We propose a framework for constructing symbolic causal world models entirely online, integrating continuous model learning and repair into the agent's decision loop. By leveraging Meta-Interpretive Learning and predicate invention to find semantically meaningful and reusable abstractions, the agent constructs a hierarchy of disentangled, high-quality concepts from its observations. We demonstrate that our lifted inference approach scales to domains with complex relational dynamics, where propositional methods suffer from combinatorial explosion, while achieving sample efficiency orders of magnitude higher than an established PPO neural-network baseline.
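To give a rough intuition for the predicate-invention idea the abstract describes, here is a minimal Python sketch. All relations and names (`adjacent`, `unlocks`, `inv_1`, `openable`) are hypothetical illustrations, not the paper's actual MIL machinery: a sub-pattern shared by several rules is named once as an invented predicate and then reused, which is the compression that avoids re-deriving the same chain per rule.

```python
# Minimal sketch of predicate invention (hypothetical relations, not the
# paper's system): a shared sub-pattern is named once as an invented
# predicate and reused by a higher-level concept.

# Primitive observations as (subject, relation, object) ground atoms.
obs = {
    ("door", "adjacent", "key"), ("key", "unlocks", "door"),
    ("chest", "adjacent", "key2"), ("key2", "unlocks", "chest"),
}

def holds(s, r, o):
    """True iff the ground atom r(s, o) was observed."""
    return (s, r, o) in obs

def inv_1(x):
    """Invented predicate: some K adjacent to x also unlocks x."""
    nearby = {o for (s, r, o) in obs if s == x and r == "adjacent"}
    return any(holds(k, "unlocks", x) for k in nearby)

def openable(x):
    # The high-level concept is defined via the reusable invented
    # predicate instead of spelling out the adjacent/unlocks chain.
    return inv_1(x)

print(openable("door"), openable("chest"), openable("key"))
# → True True False
```

In a real MIL system the invented predicate is found by the learner under metarule constraints rather than hand-written, but the payoff is the same: one named abstraction serves every rule that needs the pattern.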
Problem

Research questions and friction points this paper is trying to address.

continual learning
causal models
sample inefficiency
scalability
transparency
Innovation

Methods, ideas, or system contributions that make the work stand out.

continual learning
causal models
predicate invention
Meta-Interpretive Learning
symbolic reasoning