🤖 AI Summary
In high-stakes domains such as healthcare, machine learning models often suffer from an accuracy–interpretability trade-off: black-box models achieve high accuracy but lack transparency, whereas symbolic methods offer interpretability at the cost of predictive performance.
Method: This paper proposes a hybrid reasoning framework that couples Answer Set Programming (ASP) rules, generated automatically by FOLD-R++, with neural networks or ensemble models. The ASP rules selectively refine the black-box model's uncertain predictions, enabling human-readable justifications and local intervention in the model's decisions.
Contribution/Results: Evaluated on five medical datasets, the approach achieves statistically significant improvements in both accuracy and F1-score over baseline methods. It simultaneously preserves high predictive performance while delivering faithful, logically grounded explanations—effectively reconciling accuracy and interpretability in safety-critical applications.
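The selective-correction idea can be illustrated with a minimal sketch: the black-box prediction is kept when the model is confident, and a symbolic rule takes over near the decision boundary. All names, the example rule, and the uncertainty threshold below are illustrative assumptions, not the paper's actual API or learned rules.

```python
def rule_predict(x):
    # Hypothetical human-readable rule, in the spirit of FOLD-R++ output:
    #   diabetes(X) :- glucose(X, G), G > 140, not on_medication(X).
    if x["glucose"] > 140 and not x["on_medication"]:
        return 1
    return 0

def hybrid_predict(x, model_proba, threshold=0.2):
    """Keep the black-box answer unless its probability falls within
    `threshold` of the 0.5 decision boundary; in that uncertain region,
    defer to the symbolic rule and record a justification."""
    p = model_proba(x)
    if abs(p - 0.5) < threshold:  # uncertain region: let the rule decide
        label = rule_predict(x)
        reason = "rule fired: glucose > 140" if label else "no rule fired"
        return label, reason
    return int(p >= 0.5), "black-box confident (p=%.2f)" % p

# Usage: an uncertain model (p=0.55) is overridden by the rule,
# and the returned reason serves as a human-readable explanation.
patient = {"glucose": 160, "on_medication": False}
label, why = hybrid_predict(patient, lambda x: 0.55)
```

In a full implementation the rules would be ASP clauses evaluated by a solver rather than Python predicates, but the gating logic is the same: interpretability is applied exactly where the black-box model is least reliable.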
📝 Abstract
Machine learning (ML) techniques play a pivotal role in high-stakes domains such as healthcare, where accurate predictions can greatly enhance decision-making. However, high-performing methods such as neural networks and ensembles are often opaque, limiting trust and broader adoption. In parallel, symbolic methods like Answer Set Programming (ASP) offer interpretable logical rules but do not always match the predictive power of ML models. This paper proposes a hybrid approach that integrates ASP rules learned by the FOLD-R++ algorithm with black-box ML classifiers to selectively correct uncertain predictions and provide human-readable explanations. Experiments on five medical datasets reveal statistically significant gains in accuracy and F1 score. This study underscores the potential of combining symbolic reasoning with conventional ML to achieve high interpretability without sacrificing accuracy.