Uncertainty Quantification in the Tsetlin Machine

📅 2025-07-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Tsetlin Machines (TMs) lack intrinsic uncertainty quantification and suffer from limited interpretability. Method: We propose a learning-dynamics-based internal probability scoring mechanism that leverages statistical properties of rule activation trajectories—without external calibration or post-hoc processing—to generate distribution-consistent probabilistic scores. Contribution/Results: This mechanism naturally characterizes prediction confidence and significantly improves out-of-distribution (OOD) detection. Unlike neural networks, which often overconfidently extrapolate, our approach identifies low-confidence predictions by analyzing the logical evolution of TM rules, enhancing transparency and trustworthiness. The probabilistic scores are inherently visualizable for interpretation. Empirical evaluation on synthetic data confirms score calibration, while experiments on CIFAR-10 expose structural limitations of current TM architectures. Overall, this work advances interpretable AI by establishing a principled, self-contained uncertainty-aware framework for logic-based learning.

📝 Abstract
Data modeling using Tsetlin machines (TMs) is all about building logical rules from the data features. The decisions of the model are based on a combination of these logical rules. Hence, the model is fully transparent and it is possible to get explanations of its predictions. In this paper, we present a probability score for TM predictions and develop new techniques for uncertainty quantification to increase the explainability further. The probability score is an inherent property of any TM variant and is derived through an analysis of the TM learning dynamics. Simulated data is used to show a clear connection between the learned TM probability scores and the underlying probabilities of the data. A visualization of the probability scores also reveals that the TM is less confident in its predictions outside the training data domain, which contrasts with the typical extrapolation phenomenon found in Artificial Neural Networks. The paper concludes with an application of the uncertainty quantification techniques on an image classification task using the CIFAR-10 dataset, where they provide new insights and suggest possible improvements to current TM image classification models.
Problem

Research questions and friction points this paper is trying to address.

Quantify uncertainty in Tsetlin Machine predictions
Enhance explainability through probability scores
Improve TM image classification model insights
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probability score derived from TM learning dynamics
Uncertainty quantification enhances TM explainability
Visualization shows TM confidence outside training domain
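The summary and abstract describe turning the TM's internal signals into a probability-like confidence score that can flag low-confidence (e.g. out-of-distribution) predictions. The paper derives its score from the TM learning dynamics; the sketch below is only a hypothetical illustration of the general idea, normalizing per-class clause sums into a probability distribution and thresholding the top-class probability. The function names and the threshold value are assumptions, not the paper's method.

```python
import numpy as np

def clause_votes_to_probs(class_sums):
    """Map summed clause outputs (one entry per class) to a
    probability-like score via a numerically stable softmax.
    Hypothetical sketch; the paper's score comes from learning dynamics."""
    s = np.asarray(class_sums, dtype=float)
    e = np.exp(s - s.max())  # shift by the max for numerical stability
    return e / e.sum()

def is_low_confidence(probs, threshold=0.6):
    """Flag a prediction as uncertain (possibly out-of-distribution)
    when the top-class probability falls below a chosen threshold."""
    return float(np.max(probs)) < threshold

# A clear vote margin yields a confident prediction...
probs = clause_votes_to_probs([12, 3, -5])
print(int(probs.argmax()), is_low_confidence(probs))

# ...while near-tied clause sums are flagged as low confidence.
print(is_low_confidence(clause_votes_to_probs([1.0, 1.1])))
```

In this toy setup, a large gap between the winning class's clause sum and the rest produces a score near 1, while near-tied sums produce scores near uniform, which is the kind of behavior the visualizations in the paper are said to expose outside the training domain.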
Runar Helin
Department of IKT, University of Agder, Grimstad, Norway
Ole-Christoffer Granmo
Professor, University of Agder
Machine Learning
Mayur Kishor Shende
Department of IKT, University of Agder, Grimstad, Norway
Lei Jiao
Department of IKT, University of Agder, Grimstad, Norway
Vladimir I. Zadorozhny
School of Computing and Information, University of Pittsburgh, Pittsburgh, PA, USA
Kunal Ganesh Dumbre
Department of IKT, University of Agder, Grimstad, Norway
Rishad Shafik
Professor of Microelectronic Systems, Newcastle University, UK
Machine Learning Hardware, Energy-Aware Computing, HW/SW Co-design
Alex Yakovlev
School of Engineering, Newcastle University, Newcastle upon Tyne, UK