RankingSHAP -- Listwise Feature Attribution Explanations for Ranking Models

📅 2024-03-24
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Existing feature attribution methods in information retrieval (e.g., SHAP) provide only pointwise, document-level explanations and fail to capture the relative ranking relationships between documents in a ranked list. Method: The paper formally defines the listwise feature attribution problem and proposes RankingSHAP, a Shapley-value-based framework for joint attribution over entire ranked outputs. It introduces two novel evaluation paradigms for assessing the correctness and completeness of attributions, adapts the attribution algorithm to learning-to-rank (LTR) models, highlights the contrastive nature of ranking as a key consideration for attribution design, and validates explanations qualitatively through a simulated study with an interpretable model. Results: Experiments on standard LTR benchmarks demonstrate that the method identifies the features governing relative document positioning, exposing the limitations of selection-based explanations and improving the interpretability of ranking models.

📝 Abstract
While SHAP (SHapley Additive exPlanations) and other feature attribution methods are commonly employed to explain model predictions, their application within information retrieval (IR), particularly for complex outputs such as ranked lists, remains limited. Existing attribution methods typically provide pointwise explanations, focusing on why a single document received a high-ranking score, rather than considering the relationships between documents in a ranked list. We present three key contributions to address this gap. First, we rigorously define listwise feature attribution for ranking models. Second, we introduce RankingSHAP, extending the popular SHAP framework to accommodate listwise ranking attribution, addressing a significant methodological gap in the field. Third, we propose two novel evaluation paradigms for assessing the faithfulness of attributions in learning-to-rank models, measuring the correctness and completeness of the explanation with respect to different aspects. Through experiments on standard learning-to-rank datasets, we demonstrate RankingSHAP's practical application while identifying the constraints of selection-based explanations. We further employ a simulated study with an interpretable model to showcase how listwise ranking attributions can be used to examine model decisions and conduct a qualitative evaluation of explanations. Due to the contrastive nature of the ranking task, our understanding of ranking model decisions can substantially benefit from feature attribution explanations like RankingSHAP.
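The abstract's core idea, attributing each feature's contribution to the entire ranked list rather than to a single document's score, can be illustrated with a minimal Shapley-value sketch. This is not the paper's RankingSHAP algorithm: the toy linear scorer, the mean-baseline feature masking, and the choice of Kendall's tau as the listwise value function are all illustrative assumptions, not details taken from the source.

```python
from itertools import combinations
from math import factorial

# Toy setup: 4 documents x 3 features, scored by a linear model.
# All values here are illustrative, not from the paper.
DOCS = [
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.5],
    [0.4, 0.4, 0.9],
    [0.7, 0.6, 0.2],
]
WEIGHTS = [0.5, 0.3, 0.2]
N_FEAT = 3

def rank(scores):
    """Document indices sorted by descending score."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])

def score_with(coalition):
    """Score docs using only features in `coalition`;
    masked features are replaced by their column mean (baseline)."""
    means = [sum(d[j] for d in DOCS) / len(DOCS) for j in range(N_FEAT)]
    return [sum(WEIGHTS[j] * (d[j] if j in coalition else means[j])
                for j in range(N_FEAT))
            for d in DOCS]

def kendall_tau(r1, r2):
    """Normalized Kendall tau between two rankings (lists of doc ids)."""
    pos1 = {d: i for i, d in enumerate(r1)}
    pos2 = {d: i for i, d in enumerate(r2)}
    conc = disc = 0
    for a, b in combinations(pos1, 2):
        s = (pos1[a] - pos1[b]) * (pos2[a] - pos2[b])
        conc, disc = (conc + 1, disc) if s > 0 else (conc, disc + 1)
    return (conc - disc) / (conc + disc)

FULL_RANK = rank(score_with(set(range(N_FEAT))))

def value(coalition):
    """Listwise value: agreement of the coalition's ranking
    with the full model's ranking."""
    return kendall_tau(rank(score_with(coalition)), FULL_RANK)

def shapley(j):
    """Exact Shapley value of feature j under the listwise value function."""
    others = [f for f in range(N_FEAT) if f != j]
    total = 0.0
    for r in range(len(others) + 1):
        for s in combinations(others, r):
            w = factorial(len(s)) * factorial(N_FEAT - len(s) - 1) / factorial(N_FEAT)
            total += w * (value(set(s) | {j}) - value(set(s)))
    return total

attributions = [shapley(j) for j in range(N_FEAT)]
print(attributions)
```

Because the value function is defined on the whole ranking, each attribution measures how much a feature contributes to the list's ordering, not to any one document's score; by Shapley efficiency, the attributions sum to the gap between the full ranking's self-agreement and the baseline ranking's agreement.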
Problem

Research questions and friction points this paper is trying to address.

Extends SHAP to explain ranked list outputs in IR
Addresses lack of listwise attribution for ranking models
Proposes new evaluation methods for explanation faithfulness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends SHAP for listwise ranking attribution
Defines listwise feature attribution rigorously
Proposes novel evaluation paradigms for faithfulness