Rigorous Explanations for Tree Ensembles

πŸ“… 2026-03-31
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Tree ensemble models, such as random forests and gradient-boosted trees, often suffer from limited interpretability, undermining user trust. This work proposes the first rigorous logical explanation framework tailored specifically for tree ensembles, integrating formal verification techniques with intrinsic structural properties of decision trees to construct explanations that precisely capture the model’s actual decision behavior. The approach formally guarantees correctness and faithfulness of the generated explanations, yielding verifiable and high-fidelity justifications for individual predictions. By ensuring that explanations are both logically sound and aligned with the model’s true reasoning process, the method significantly enhances the transparency and credibility of tree-based ensemble models.
πŸ“ Abstract
Tree ensembles (TEs) find a multitude of practical applications. They represent one of the most general and accurate classes of machine learning methods. While they are typically quite concise in representation, their operation remains inscrutable to human decision makers. One solution to build trust in the operation of TEs is to automatically identify explanations for the predictions made. Evidently, we can only achieve trust using explanations if those explanations are rigorous, that is, if they truly reflect properties of the underlying predictor they explain. This paper investigates the computation of rigorously-defined, logically-sound explanations for the concrete case of two well-known examples of tree ensembles, namely random forests and boosted trees.
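To make the notion of a rigorous explanation concrete: an explanation is rigorous when the retained feature values alone logically entail the ensemble's prediction, regardless of how the remaining features are set. The sketch below illustrates that idea on a toy majority-vote ensemble of hand-written boolean trees; it is an illustrative brute-force sketch of subset-minimal (abductive) explanations, not the paper's algorithm, and all names are hypothetical.

```python
from itertools import product

# Toy "random forest" over three boolean features: each tree maps a
# feature vector to a class vote, and the ensemble predicts by majority.
def tree1(x): return 1 if x[0] else 0
def tree2(x): return 1 if x[0] and x[1] else 0
def tree3(x): return 1 if x[2] else 0

TREES = [tree1, tree2, tree3]

def predict(x):
    votes = sum(t(x) for t in TREES)
    return 1 if 2 * votes > len(TREES) else 0

def entails(fixed, n_features, label):
    """True iff every completion of the unfixed features keeps the label.

    `fixed` maps feature index -> value; the other features range freely.
    """
    free = [i for i in range(n_features) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        point = dict(fixed)
        point.update(zip(free, values))
        if predict([point[i] for i in range(n_features)]) != label:
            return False
    return True

def rigorous_explanation(x):
    """Greedily drop features to reach a subset-minimal rigorous explanation."""
    label = predict(x)
    fixed = dict(enumerate(x))
    for i in list(fixed):
        trial = {j: v for j, v in fixed.items() if j != i}
        if entails(trial, len(x), label):
            fixed = trial  # feature i is not needed to entail the prediction
    return fixed, label

expl, label = rigorous_explanation([1, 1, 0])
print(expl, label)  # → {0: 1, 1: 1} 1
```

Here feature 2 is dropped because fixing features 0 and 1 to 1 already guarantees a majority vote for class 1 under every completion. The exhaustive `entails` check is exponential in the number of free features; the appeal of the formal-verification approach is precisely to replace such enumeration with logical reasoning that scales to real ensembles.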
Problem

Research questions and friction points this paper is trying to address.

Tree Ensembles
Explainability
Rigorous Explanations
Random Forests
Boosted Trees
Innovation

Methods, ideas, or system contributions that make the work stand out.

rigorous explanations
tree ensembles
logically-sound
random forests
boosted trees