🤖 AI Summary
This work addresses the challenge of balancing predictive performance and interpretability in rule-based models for high-stakes decision-making by proposing a neuro-symbolic approach built on differentiable truth tables. The method introduces a novel soft TopK operator combined with straight-through estimation to enable end-to-end learning of sparse, cardinality-constrained feature selection, while keeping the forward pass sparse to support exact symbolic rule extraction. It further applies the Quine-McCluskey algorithm to minimize the extracted rules. Experiments on 28 benchmark datasets show that the learned rules achieve higher predictive accuracy at lower model complexity than state-of-the-art methods.
📝 Abstract
Interpretable machine learning is essential in high-stakes domains where decision-making requires accountability, transparency, and trust. While rule-based models offer global and exact interpretability, learning rule sets that simultaneously achieve high predictive performance and low, human-understandable complexity remains challenging. To address this, we introduce TT-Sparse, a flexible neural building block that leverages differentiable truth tables as nodes to learn sparse, effective connections. A key contribution of our approach is a new soft TopK operator with straight-through estimation for learning discrete, cardinality-constrained feature selection in an end-to-end differentiable manner. Crucially, the forward pass remains sparse, enabling efficient computation and exact symbolic rule extraction. As a result, each node (and the entire model) can be transformed exactly into compact, globally interpretable DNF/CNF Boolean formulas via Quine-McCluskey minimization. Extensive empirical results across 28 datasets spanning binary, multiclass, and regression tasks show that the learned sparse rules exhibit superior predictive performance with lower complexity compared to existing state-of-the-art methods.
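The soft TopK operator with straight-through estimation can be illustrated with a minimal sketch. This is not the paper's exact operator: the softmax relaxation, the temperature parameter, and the `soft_topk_ste` function name are assumptions made for illustration. The key idea shown is that the forward pass uses an exactly k-sparse mask, while a differentiable relaxation carries gradients in the backward pass.

```python
import numpy as np

def soft_topk_ste(scores, k, temperature=0.1):
    """Hedged sketch of a soft TopK with straight-through estimation.

    Forward: an exactly k-sparse 0/1 mask over the scores.
    Backward (in an autodiff framework): gradients would flow through
    the soft relaxation instead of the non-differentiable hard mask.
    """
    # Soft relaxation: a temperature-scaled softmax over the scores
    # (this is the assumed gradient path, not the paper's exact form).
    soft = np.exp(scores / temperature)
    soft = soft / soft.sum()

    # Hard path: select exactly k features, keeping the forward pass sparse.
    hard = np.zeros_like(scores)
    hard[np.argsort(scores)[-k:]] = 1.0

    # Straight-through trick: in e.g. PyTorch one would return
    #   soft + (hard - soft).detach()
    # so the forward value equals `hard` while gradients follow `soft`.
    return hard, soft
```

The cardinality constraint is enforced exactly in the forward pass, which is what makes symbolic rule extraction from the trained node possible.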
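Because each node's forward pass is sparse over a small set of Boolean inputs, its truth table can be enumerated exhaustively and converted into an exact DNF formula. The sketch below (the function name and interface are illustrative assumptions) stops at the unminimized DNF; the paper additionally minimizes such formulas with the Quine-McCluskey algorithm, for which off-the-shelf implementations exist (e.g. SymPy's `SOPform`).

```python
from itertools import product

def truth_table_to_dnf(fn, names):
    """Enumerate all assignments of a Boolean node `fn` over variables
    `names` and emit an exact DNF: one conjunction per satisfying row.
    Quine-McCluskey minimization (not shown) would compress the result."""
    terms = []
    for bits in product([0, 1], repeat=len(names)):
        if fn(*bits):
            lits = [n if b else f"~{n}" for n, b in zip(names, bits)]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms) if terms else "False"
```

For example, a node computing XOR of two inputs yields `(~a & b) | (a & ~b)`, which is already minimal; larger tables are where the minimization step pays off.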