Pushing the Boundaries of Interpretability: Incremental Enhancements to the Explainable Boosting Machine

📅 2025-11-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the insufficient trade-off between transparency and accuracy in Explainable Boosting Machines (EBMs) for high-stakes domains, this paper proposes a triple-coordinated optimization framework: (1) Bayesian hyperparameter optimization to enhance generalization; (2) a fairness-aware multi-objective loss function jointly optimizing accuracy, group fairness, and individual robustness; and (3) a self-supervised pretraining-based cold-start strategy to improve interpretability stability under low-data regimes and distribution shift. Experiments on the Adult, credit card fraud, and UCI Heart Disease datasets demonstrate that our method maintains state-of-the-art (SOTA) accuracy while significantly improving decision fairness (ΔSPD ≤ −0.02), robustness (38% reduction in AUC fluctuation under adversarial perturbations), and explanation consistency. This work establishes a reproducible and verifiable technical pathway for the responsible deployment of glass-box models.

Technology Category

Application Category

📝 Abstract
The widespread adoption of complex machine learning models in high-stakes domains has brought the "black-box" problem to the forefront of responsible AI research. This paper aims at addressing this issue by improving the Explainable Boosting Machine (EBM), a state-of-the-art glassbox model that delivers both high accuracy and complete transparency. The paper outlines three distinct enhancement methodologies: targeted hyperparameter optimization with Bayesian methods, the implementation of a custom multi-objective function for fairness for hyperparameter optimization, and a novel self-supervised pre-training pipeline for cold-start scenarios. All three methodologies are evaluated across standard benchmark datasets, including the Adult Income, Credit Card Fraud Detection, and UCI Heart Disease datasets. The analysis indicates that while the tuning process yielded marginal improvements in the primary ROC AUC metric, it led to a subtle but important shift in the model's decision-making behavior, demonstrating the value of a multi-faceted evaluation beyond a single performance score. This work is positioned as a critical step toward developing machine learning systems that are not only accurate but also robust, equitable, and transparent, meeting the growing demands of regulatory and ethical compliance.
Problem

Research questions and friction points this paper is trying to address.

Enhancing Explainable Boosting Machine for better interpretability and accuracy
Addressing black-box problem in high-stakes AI with transparent models
Improving fairness and robustness through multi-objective optimization methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian hyperparameter optimization for EBM
Multi-objective fairness function in tuning
Self-supervised pre-training for cold-start scenarios
🔎 Similar Papers
No similar papers found.