TT-Sparse: Learning Sparse Rule Models with Differentiable Truth Tables

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing predictive performance and interpretability in rule-based models for high-stakes decision-making by proposing a neuro-symbolic approach based on differentiable truth tables. The method introduces a novel soft TopK operator combined with straight-through estimation to enable end-to-end sparse feature selection under cardinality constraints, while preserving forward sparsity to support exact symbolic rule extraction. Furthermore, it integrates the Quine-McCluskey algorithm for rule minimization. Experimental results across 28 benchmark datasets show that the learned rules achieve higher predictive accuracy at lower model complexity than state-of-the-art methods.
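The core trick described above — a hard, cardinality-constrained selection in the forward pass paired with a soft relaxation for gradients — can be sketched as follows. This is a minimal illustration of a straight-through top-k, not the paper's exact operator; the function name `topk_ste` and the softmax-based relaxation are assumptions for illustration.

```python
import numpy as np

def topk_ste(scores, k, temperature=0.1):
    """Straight-through top-k sketch (hypothetical, not the paper's exact operator).

    Forward pass: a hard 0/1 mask with exactly k ones, so computation and
    rule extraction stay sparse. Backward pass (in an autodiff framework):
    gradients would flow through the soft relaxation instead.
    """
    # Soft relaxation: temperature-scaled softmax over the selection scores.
    soft = np.exp(scores / temperature)
    soft = soft / soft.sum()

    # Hard mask: ones at the k largest scores (exact forward sparsity).
    hard = np.zeros_like(scores)
    hard[np.argsort(scores)[-k:]] = 1.0

    # In PyTorch-style autodiff the straight-through estimator would be
    # written as: hard + soft - soft.detach(), so the forward value is
    # `hard` while the gradient is that of `soft`.
    return hard, soft
```

In a real training loop the hard mask gates which input features each truth-table node sees, while the soft relaxation supplies the learning signal for the discrete selection.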

📝 Abstract
Interpretable machine learning is essential in high-stakes domains where decision-making requires accountability, transparency, and trust. While rule-based models offer global and exact interpretability, learning rule sets that simultaneously achieve high predictive performance and low, human-understandable complexity remains challenging. To address this, we introduce TT-Sparse, a flexible neural building block that leverages differentiable truth tables as nodes to learn sparse, effective connections. A key contribution of our approach is a new soft TopK operator with straight-through estimation for learning discrete, cardinality-constrained feature selection in an end-to-end differentiable manner. Crucially, the forward pass remains sparse, enabling efficient computation and exact symbolic rule extraction. As a result, each node (and the entire model) can be transformed exactly into compact, globally interpretable DNF/CNF Boolean formulas via Quine-McCluskey minimization. Extensive empirical results across 28 datasets spanning binary, multiclass, and regression tasks show that the learned sparse rules exhibit superior predictive performance with lower complexity compared to existing state-of-the-art methods.
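Because the forward pass is sparse and each node computes an exact Boolean function, rule extraction reduces to enumerating the node's truth table and reading off a DNF. The sketch below shows that enumeration step; the function names are hypothetical, and the Quine-McCluskey minimization the paper applies afterwards (merging adjacent minterms) is omitted for brevity.

```python
from itertools import product

def extract_dnf(fn, num_inputs):
    """Exact rule extraction from a Boolean node (illustrative sketch).

    Enumerates all 2^num_inputs rows of the truth table and returns the
    DNF as a list of minterms: one tuple of 0/1 literal values per row
    where the node outputs True. Quine-McCluskey minimization would then
    compress this list into a compact formula.
    """
    minterms = []
    for bits in product([0, 1], repeat=num_inputs):
        if fn(*bits):
            minterms.append(bits)
    return minterms

def dnf_to_string(minterms, names):
    """Render a minterm list as a human-readable DNF formula."""
    terms = []
    for bits in minterms:
        lits = [n if b else f"NOT {n}" for n, b in zip(names, bits)]
        terms.append("(" + " AND ".join(lits) + ")")
    return " OR ".join(terms)
```

For example, a two-input node computing XOR yields the two minterms `(0, 1)` and `(1, 0)`, i.e. `(NOT x1 AND x2) OR (x1 AND NOT x2)`. Enumeration is exponential in the fan-in, which is exactly why the cardinality-constrained sparse selection matters: it keeps each node's truth table small enough to enumerate exactly.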
Problem

Research questions and friction points this paper is trying to address.

interpretable machine learning
rule-based models
sparse rule learning
model complexity
truth tables
Innovation

Methods, ideas, or system contributions that make the work stand out.

differentiable truth tables
soft TopK operator
sparse rule learning
symbolic rule extraction
global interpretability
Hans Farrell Soegeng
School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore
Sarthak Ketanbhai Modi
School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore
Thomas Peyrin
Professor, Nanyang Technological University
Cryptography · Cryptanalysis · Information Security