AI Summary
This work addresses the high computational complexity of Shapley value estimation in explainable AI, which hinders its scalable application to top-k feature identification. To this end, we propose the Comparable Marginal Contributions Sampling (CMCS) framework, the first to integrate reverse sampling with multi-armed bandit principles, augmented by counterfactual sampling, covariance reduction, and confidence-aware sequential decision-making. By leveraging observed feature correlations, CMCS substantially reduces estimation variance. Empirical evaluation across multiple benchmark datasets shows that CMCS reduces sampling requirements by 40% on average compared to state-of-the-art methods, while improving top-k feature identification accuracy by 12–28%. Crucially, our analysis demonstrates that the objectives of approximate-all attribution and top-k identification are fundamentally distinct and not interchangeable. This work establishes a new paradigm for efficient and precise local feature attribution.
Abstract
Additive feature explanations rely primarily on game-theoretic notions such as the Shapley value, viewing features as cooperating players. The Shapley value's popularity in and outside of explainable AI stems from its axiomatic uniqueness. However, its computational complexity severely limits practicability. Most works investigate the uniform approximation of all features' Shapley values, needlessly consuming samples for insignificant features. In contrast, identifying the $k$ most important features can already be sufficiently insightful and opens up algorithmic opportunities from the field of multi-armed bandits. We propose Comparable Marginal Contributions Sampling (CMCS), a method for the top-$k$ identification problem that utilizes a new sampling scheme exploiting correlated observations. We conduct experiments to showcase the efficacy of our method compared to competitive baselines. Our empirical findings reveal that estimation quality for the approximate-all problem does not necessarily transfer to top-$k$ identification, and vice versa.
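To make the underlying sampling problem concrete, the sketch below estimates Shapley values by averaging marginal contributions over random feature permutations and then reads off the top-$k$ features. This is the generic Monte Carlo baseline that CMCS improves upon, not the CMCS algorithm itself; the toy additive value function and all names are illustrative assumptions.

```python
import random

def shapley_monte_carlo(value, n_features, n_samples=2000, seed=0):
    """Plain permutation-sampling Shapley estimator (baseline, not CMCS).

    For each sampled permutation, each feature's marginal contribution
    v(S ∪ {i}) - v(S) is recorded, where S is the set of features
    preceding i in the permutation; estimates are the sample means.
    """
    rng = random.Random(seed)
    est = [0.0] * n_features
    for _ in range(n_samples):
        perm = list(range(n_features))
        rng.shuffle(perm)
        coalition = set()
        prev = value(coalition)
        for i in perm:
            coalition.add(i)
            cur = value(coalition)
            est[i] += cur - prev  # marginal contribution of feature i
            prev = cur
    return [e / n_samples for e in est]

# Toy additive game: feature i contributes weight w[i], so the exact
# Shapley value of feature i is w[i] itself.
w = [3.0, 1.0, 0.5, 0.0]
phi = shapley_monte_carlo(lambda S: sum(w[i] for i in S), len(w))

# Top-k identification only needs the ranking, not accurate values
# for every feature -- the observation motivating the bandit view.
top2 = sorted(range(len(w)), key=lambda i: -phi[i])[:2]
```

In this additive toy game every marginal contribution of feature $i$ equals $w_i$ regardless of the coalition, so the estimator is exact; for real models the per-feature variance differs, which is exactly the slack that adaptive, bandit-style sample allocation exploits.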