ICE: Intervention-Consistent Explanation Evaluation with Statistical Grounding for LLMs

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of statistical rigor in existing explanation evaluation methods, which cannot distinguish genuine faithfulness of model explanations from chance-level performance. To this end, the authors propose the ICE framework, which applies multiple intervention operators, such as deletion and replacement, combined with randomization tests. Faithfulness is quantified as win rates against matched random baselines, reported with confidence intervals. Extensive experiments across seven large language models, four English tasks, and six non-English languages reveal substantial discrepancies: faithfulness varies by up to 44 percentage points across operators, roughly one-third of configurations exhibit anti-faithful behavior, and faithfulness is essentially uncorrelated with human-judged plausibility. These findings expose critical limitations of current explanation methods and underscore the need for a more principled, statistically grounded evaluation paradigm.

📝 Abstract
Evaluating whether explanations faithfully reflect a model's reasoning remains an open problem. Existing benchmarks use single interventions without statistical testing, making it impossible to distinguish genuine faithfulness from chance-level performance. We introduce ICE (Intervention-Consistent Explanation), a framework that compares explanations against matched random baselines via randomization tests under multiple intervention operators, yielding win rates with confidence intervals. Evaluating 7 LLMs across 4 English tasks, 6 non-English languages, and 2 attribution methods, we find that faithfulness is operator-dependent: operator gaps reach up to 44 percentage points, with deletion typically inflating estimates on short text but the pattern reversing on long text, suggesting that faithfulness should be interpreted comparatively across intervention operators rather than as a single score. Randomized baselines reveal anti-faithfulness in one-third of configurations, and faithfulness shows zero correlation with human plausibility (|r| < 0.04). Multilingual evaluation reveals dramatic model-language interactions not explained by tokenization alone. We release the ICE framework and ICEBench benchmark.
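The win-rate protocol described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `model_score` (a callable returning the model's confidence in its original prediction on a perturbed input), the deletion operator, and all parameter names are assumptions introduced here for clarity.

```python
import random
import statistics

def deletion_intervention(tokens, indices):
    """One intervention operator: delete the tokens at the given positions."""
    drop = set(indices)
    return [t for i, t in enumerate(tokens) if i not in drop]

def win_rate_with_ci(model_score, tokens, explanation_ranking, k=3,
                     n_random=200, n_boot=1000, seed=0):
    """Compare deleting the explanation's top-k tokens against matched
    random deletions of the same size; return the win rate and a
    bootstrap 95% confidence interval over the win indicators.
    """
    rng = random.Random(seed)
    # A faithful explanation's top-k deletion should hurt the model's
    # confidence more than a random deletion of the same k tokens.
    expl_drop = model_score(deletion_intervention(tokens, explanation_ranking[:k]))
    wins = []
    for _ in range(n_random):
        rand_idx = rng.sample(range(len(tokens)), k)
        rand_drop = model_score(deletion_intervention(tokens, rand_idx))
        wins.append(1.0 if expl_drop < rand_drop else 0.0)  # explanation "wins"
    rate = statistics.fmean(wins)
    # Percentile bootstrap over the per-intervention win indicators.
    boots = sorted(statistics.fmean(rng.choices(wins, k=len(wins)))
                   for _ in range(n_boot))
    return rate, (boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)])
```

A win rate whose confidence interval sits above 0.5 indicates faithfulness beyond chance; an interval below 0.5 would correspond to the anti-faithful behavior the paper reports. The same loop generalizes to other operators (e.g. replacement) by swapping out `deletion_intervention`.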
Problem

Research questions and friction points this paper is trying to address.

faithful explanation
intervention-based evaluation
statistical grounding
LLM interpretability
explanation evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intervention-Consistent Explanation
Statistical Grounding
Randomization Test
Faithfulness Evaluation
Multilingual LLMs
Abhinaba Basu
Indian Institute of Information Technology, Allahabad (IIITA); National Institute of Electronics and Information Technology (NIELIT)
Pavan Chakraborty
Indian Institute of Information Technology Allahabad
Artificial Intelligence · Robotics & Instrumentation