🤖 AI Summary
Chess engine evaluations, while highly accurate, lack interpretability—obscuring individual pieces’ contributions to position scores. This paper introduces SHAP (SHapley Additive exPlanations) to chess analysis for the first time, modeling board states as piece-wise feature vectors and computing each piece’s marginal contribution via systematic ablation. The resulting attribution is fine-grained, locally faithful, and intuitively interpretable. Unlike conventional black-box analysis, our approach grounds classical chess evaluation in explainable AI (XAI) theory. Experiments yield per-piece attribution maps, enabling visual interpretation, human player training, and cross-engine comparative analysis. To foster reproducibility and community advancement, we release all code and datasets. This work establishes a principled XAI framework for chess, bridging algorithmic evaluation with human-understandable reasoning.
📝 Abstract
Contemporary chess engines offer precise yet opaque evaluations, typically expressed as centipawn scores. While effective for decision-making, these outputs obscure the underlying contributions of individual pieces or patterns. In this paper, we explore adapting SHAP (SHapley Additive exPlanations) to the domain of chess analysis, aiming to attribute a chess engine's evaluation to specific pieces on the board. By treating pieces as features and systematically ablating them, we compute additive, per-piece contributions that explain the engine's output in a locally faithful and human-interpretable manner. This method draws inspiration from classical chess pedagogy, where players assess positions by mentally removing pieces, and grounds it in modern explainable AI techniques. Our approach opens new possibilities for visualization, human training, and engine comparison. We release accompanying code and data to foster future research in interpretable chess AI.
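The ablation scheme described in the abstract can be sketched as an exact Shapley computation over piece coalitions. This is a minimal illustration, not the paper's implementation: `PIECE_VALUES` and the material-sum `evaluate` are hypothetical stand-ins for a real engine call, chosen so the attribution is easy to verify by hand.

```python
from itertools import combinations
from math import factorial

# Toy position: signed material values (white positive, black negative).
# A real setup would query an engine (e.g., Stockfish) on the ablated board.
PIECE_VALUES = {"wQ": 9.0, "wR": 5.0, "bN": -3.0, "bP": -1.0}

def evaluate(pieces):
    """Toy stand-in for an engine evaluation: signed material sum."""
    return sum(PIECE_VALUES[p] for p in pieces)

def shapley_values(pieces, evaluate):
    """Exact per-piece Shapley values by enumerating all coalitions.

    For each piece p, average its marginal contribution
    evaluate(S + {p}) - evaluate(S) over all subsets S of the
    remaining pieces, weighted by |S|! * (n - |S| - 1)! / n!.
    """
    n = len(pieces)
    phi = {}
    for p in pieces:
        others = [q for q in pieces if q != p]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (evaluate(subset + (p,)) - evaluate(subset))
        phi[p] = total
    return phi

attributions = shapley_values(tuple(PIECE_VALUES), evaluate)
```

Because the toy evaluation is purely additive, each piece's Shapley value equals its material value exactly; with a real engine, interaction effects (pins, batteries, king safety) would shift these attributions away from raw material, which is precisely what makes the explanation informative. Exact enumeration is exponential in the number of pieces, so practical use would rely on SHAP's sampling approximations.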