Antithetic Sampling for Top-k Shapley Identification

πŸ“… 2025-04-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the high computational complexity of Shapley value estimation in explainable AI, which hinders its scalable application to top-k feature identification. To this end, we propose the Comparable Marginal Contributions Sampling (CMCS) frameworkβ€”the first to integrate reverse sampling with multi-armed bandit principles, augmented by counterfactual sampling, covariance reduction, and confidence-aware sequential decision-making. By leveraging observed feature correlations, CMCS substantially reduces estimation variance. Empirical evaluation across multiple benchmark datasets shows that CMCS reduces sampling requirements by 40% on average compared to state-of-the-art methods, while improving top-k feature identification accuracy by 12–28%. Crucially, our analysis demonstrates that the objectives of approximate-all attribution and top-k identification are fundamentally distinct and not interchangeable. This work establishes a new paradigm for efficient and precise local feature attribution.

Technology Category

Application Category

πŸ“ Abstract
Additive feature explanations rely primarily on game-theoretic notions such as the Shapley value by viewing features as cooperating players. The Shapley value's popularity in and outside of explainable AI stems from its axiomatic uniqueness. However, its computational complexity severely limits practicability. Most works investigate the uniform approximation of all features' Shapley values, needlessly consuming samples for insignificant features. In contrast, identifying the $k$ most important features can already be sufficiently insightful and yields the potential to leverage algorithmic opportunities connected to the field of multi-armed bandits. We propose Comparable Marginal Contributions Sampling (CMCS), a method for the top-$k$ identification problem utilizing a new sampling scheme taking advantage of correlated observations. We conduct experiments to showcase the efficacy of our method in compared to competitive baselines. Our empirical findings reveal that estimation quality for the approximate-all problem does not necessarily transfer to top-$k$ identification and vice versa.
Problem

Research questions and friction points this paper is trying to address.

Efficiently identify top-k important features
Reduce computational complexity of Shapley values
Improve sampling for correlated feature contributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes antithetic sampling for efficiency
Focuses on top-k Shapley values identification
Employs Comparable Marginal Contributions Sampling (CMCS)
πŸ”Ž Similar Papers
Patrick Kolpaczki
Patrick Kolpaczki
PhD. Candidate, Ludwig Maximilian University of Munich
Shapley ValueCooperative GamesGame TheoryExplainable AI
T
Tim Nielen
LMU Munich
E
Eyke Hullermeier
LMU Munich, Munich Center for Machine Learning