Efficiently Learning Probabilistic Logical Models by Cheaply Ranking Mined Rules

๐Ÿ“… 2024-09-24
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the high computational cost, poor scalability, and weak interpretability inherent in the automated construction of probabilistic logic models, this paper proposes an efficient and scalable learning framework. Methodologically, the authors first introduce a novel joint precision–recall utility metric for logical rules; design a linear-time recursive subgraph mining algorithm; and develop a utility-driven rule ranking mechanism with a provable lower bound on the utility of the learnt theory. The implementation leverages relational graph representations within the SPECTRUM learning architecture. Experimental results demonstrate that the approach, running on CPUs, uses less than 1% of the runtime of state-of-the-art neural networks on GPUs; scales to substantially larger datasets; and yields learned logical theories with significantly higher accuracy than all baselines. The core contribution is a new paradigm for probabilistic logic learning that simultaneously achieves high efficiency, strong interpretability, and rigorous theoretical guarantees.

๐Ÿ“ Abstract
Probabilistic logical models are a core component of neurosymbolic AI and are important in their own right for tasks that require high explainability. Unlike neural networks, logical theories that underlie the model are often handcrafted using domain expertise, making their development costly and prone to errors. While there are algorithms that learn logical theories from data, they are generally prohibitively expensive, limiting their applicability in real-world settings. Here, we introduce precision and recall for logical rules and define their composition as rule utility -- a cost-effective measure of the predictive power of logical theories. We also introduce SPECTRUM, a scalable framework for learning logical theories from relational data. Its scalability derives from a linear-time algorithm that mines recurrent subgraphs in the data graph along with a second algorithm that, using the cheap utility measure, efficiently ranks rules derived from these subgraphs. Finally, we prove theoretical guarantees on the utility of the learnt logical theory. As a result, we demonstrate across various tasks that SPECTRUM scales to larger datasets, often learning more accurate logical theories on CPUs in <1% of the runtime of SOTA neural network approaches on GPUs.
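The abstract introduces precision and recall for logical rules and composes them into a rule utility score, but does not spell out the formulas. The sketch below illustrates one standard way such scores can be computed for a rule of the form body => head over a set of ground facts; the F1-style composition and all names are assumptions for illustration, not the paper's actual definitions.

```python
# Hedged sketch: scoring a logical rule "body => head" over ground facts.
# The exact definitions of precision, recall, and utility in the paper
# are not reproduced in the summary; these formulas are illustrative.

def rule_scores(groundings, head_facts):
    """groundings: list of (body_holds, head_atom) pairs obtained by
    instantiating the rule's variables; head_facts: set of true head atoms."""
    fired = [head for body_holds, head in groundings if body_holds]
    if not fired:
        return 0.0, 0.0, 0.0
    correct = sum(1 for head in fired if head in head_facts)
    precision = correct / len(fired)                      # how often firing is right
    recall = correct / len(head_facts) if head_facts else 0.0  # coverage of true heads
    # Compose precision and recall into a single utility; the harmonic
    # mean (F1) used here is an assumption, not the paper's formula.
    denom = precision + recall
    utility = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, utility

# Toy example: rule "smokes(X) => cancer(X)" over three people.
head_facts = {"cancer(anna)", "cancer(bob)"}
groundings = [
    (True, "cancer(anna)"),   # anna smokes and has cancer
    (True, "cancer(carl)"),   # carl smokes but does not have cancer
    (False, "cancer(bob)"),   # bob has cancer but does not smoke
]
p, r, u = rule_scores(groundings, head_facts)
# p == 0.5, r == 0.5, u == 0.5
```

A key property suggested by the abstract is that such scores are cheap: they require only counting over rule groundings, not fitting a probabilistic model, which is what makes ranking many candidate rules tractable.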
Problem

Research questions and friction points this paper is trying to address.

Develop cost-effective probabilistic logical models
Improve scalability of learning logical theories
Enhance accuracy and efficiency in neurosymbolic AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces rule utility for cost-effective logical theory evaluation
Develops SPECTRUM for scalable logical theory learning
Uses linear-time algorithms for efficient rule mining and ranking
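The second stage described above ranks rules derived from mined subgraphs by their cheap utility score. A minimal sketch of such utility-driven ranking is below; the function name, the top-k cutoff, and the score table are hypothetical, not SPECTRUM's actual API.

```python
# Hedged sketch: rank candidate rules by a cheap utility score and keep
# the top-k. All names here are illustrative, not SPECTRUM's interface.

def rank_rules(candidates, utility, k):
    """candidates: iterable of rules; utility: rule -> float score;
    returns the k highest-utility rules, best first."""
    return sorted(candidates, key=utility, reverse=True)[:k]

# Toy candidate rules with precomputed utility scores (made-up values).
scores = {"r1": 0.2, "r2": 0.9, "r3": 0.5}
top = rank_rules(["r1", "r2", "r3"], scores.get, k=2)
# top == ["r2", "r3"]
```

Because utility is cheap to evaluate, ranking dominates neither the mining step nor overall runtime, which is consistent with the paper's reported end-to-end efficiency.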
๐Ÿ”Ž Similar Papers
No similar papers found.