Interpretable Hybrid Machine Learning Models Using FOLD-R++ and Answer Set Programming

📅 2025-06-24
🤖 AI Summary
In high-stakes domains such as healthcare, machine learning models often suffer from an accuracy–interpretability trade-off: black-box models achieve high accuracy but lack transparency, whereas symbolic methods offer interpretability at the cost of predictive performance. Method: This paper proposes a novel hybrid reasoning framework that synergistically couples Answer Set Programming (ASP) rules—automatically generated by FOLD-R++—with neural networks or ensemble models. The ASP rules selectively refine uncertain predictions of the black-box model, thereby enabling human-readable justifications and local model intervention. Contribution/Results: Evaluated on five medical datasets, the approach achieves statistically significant improvements in both accuracy and F1-score over baseline methods. It simultaneously preserves high predictive performance while delivering faithful, logically grounded explanations—effectively reconciling accuracy and interpretability in safety-critical applications.
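The selective-correction idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `black_box_predict` and `asp_rules_predict` are hypothetical stand-ins for the trained classifier and the FOLD-R++-learned ASP rules, and the confidence threshold is an assumed value.

```python
CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff; the paper's criterion may differ

def black_box_predict(x):
    """Toy stand-in for a neural network or ensemble.
    Returns (label, confidence)."""
    score = 0.9 if x["glucose"] > 140 else 0.55
    return (1 if score >= 0.5 else 0), score

def asp_rules_predict(x):
    """Toy stand-in for FOLD-R++-learned ASP rules, e.g.
        diabetes(X) :- glucose(X,G), G > 125, bmi(X,B), B > 30.
    Returns (label, fired_rule) or (None, None) if no rule applies."""
    if x["glucose"] > 125 and x["bmi"] > 30:
        return 1, "glucose > 125 and bmi > 30"
    return None, None

def hybrid_predict(x):
    """Confident black-box predictions pass through; uncertain ones
    are handed to the symbolic rules, which also supply a
    human-readable justification when they fire."""
    label, conf = black_box_predict(x)
    if conf >= CONFIDENCE_THRESHOLD:
        return label, "black-box (confident)"
    rule_label, rule = asp_rules_predict(x)
    if rule_label is not None:
        return rule_label, f"overridden by rule: {rule}"
    return label, "black-box (no rule fired)"
```

Only low-confidence inputs ever reach the rules, so the black-box model's accuracy is preserved on cases it already handles well, while the intervened cases come with a logical justification.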

📝 Abstract
Machine learning (ML) techniques play a pivotal role in high-stakes domains such as healthcare, where accurate predictions can greatly enhance decision-making. However, most high-performing methods such as neural networks and ensemble methods are often opaque, limiting trust and broader adoption. In parallel, symbolic methods like Answer Set Programming (ASP) offer the possibility of interpretable logical rules but do not always match the predictive power of ML models. This paper proposes a hybrid approach that integrates ASP-derived rules from the FOLD-R++ algorithm with black-box ML classifiers to selectively correct uncertain predictions and provide human-readable explanations. Experiments on five medical datasets reveal statistically significant performance gains in accuracy and F1 score. This study underscores the potential of combining symbolic reasoning with conventional ML to achieve high interpretability without sacrificing accuracy.
Problem

Research questions and friction points this paper is trying to address.

Combining symbolic reasoning with ML for interpretability without accuracy loss
Correcting uncertain predictions using ASP-derived rules from FOLD-R++
Enhancing trust in high-stakes domains via hybrid ML models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid framework coupling FOLD-R++-generated ASP rules with black-box ML classifiers
Symbolic rules selectively correct uncertain predictions and justify them
Statistically significant gains in accuracy and F1 score