Weighted Average Gradients for Feature Attribution

📅 2025-05-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Expected Gradients (EG) produces unstable and less interpretable attributions because it averages uniformly over baseline inputs. Method: We propose Weighted Average Gradients (WG), the first unsupervised baseline-suitability evaluation mechanism within the integrated gradients framework, which learns baseline-specific weights and adaptively aggregates per-baseline gradients. Contribution/Results: WG provably satisfies key interpretability axioms, including completeness and sensitivity, and enables input-adaptive selection of compact, high-fidelity baseline subsets that balance accuracy and efficiency. Extensive multi-task experiments show that WG improves core attribution metrics by 10–35% over EG, substantially improves explanation stability, and reduces computational overhead by roughly 40%.

📝 Abstract
In explainable AI, Integrated Gradients (IG) is a widely adopted technique for attributing a model's output to its input features by accumulating gradients along a path from a baseline input to the current input. The choice of baseline significantly influences the resulting explanation. While the traditional Expected Gradients (EG) method samples baselines uniformly and averages them with equal weights, this study argues that baselines should not be treated as equally suitable. We introduce Weighted Average Gradients (WG), a novel approach that evaluates baseline suitability without supervision and incorporates a strategy for selecting effective baselines. Theoretical analysis demonstrates that WG satisfies essential criteria for explanation methods and offers greater stability than prior approaches. Experimental results further confirm that WG outperforms EG across diverse scenarios, improving main metrics by 10–35%. Moreover, by scoring baselines, our method can filter a compact subset of effective baselines for each input, maintaining high accuracy while reducing computational cost. The code is available at: https://github.com/Tamnt240904/weighted_baseline.
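To make the mechanism in the abstract concrete, here is a minimal NumPy sketch of IG for a toy analytically-differentiable model, plus baseline aggregation: a uniform average over baselines reproduces EG, and a non-uniform weight vector gives the weighted variant. The suitability score used for the weights (an exponential of the input–baseline distance) is a hypothetical stand-in, not the paper's actual scheme, which is not reproduced in this excerpt.

```python
import numpy as np

def model(x, w):
    """Toy differentiable model: f(x) = (w . x)^2."""
    return float(np.dot(w, x)) ** 2

def model_grad(x, w):
    """Analytic gradient of the toy model: 2 (w . x) w."""
    return 2.0 * np.dot(w, x) * w

def integrated_gradients(x, baseline, w, steps=256):
    """IG_i = (x_i - b_i) * mean over alpha of dF/dx_i at b + alpha (x - b)."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint Riemann sum
    path_grads = np.stack(
        [model_grad(baseline + a * (x - baseline), w) for a in alphas]
    )
    return (x - baseline) * path_grads.mean(axis=0)

def aggregate_attributions(x, baselines, w, weights=None):
    """Aggregate per-baseline IG attributions.
    weights=None gives the uniform average (Expected Gradients);
    a non-uniform weight vector gives the weighted variant."""
    attributions = np.stack([integrated_gradients(x, b, w) for b in baselines])
    if weights is None:
        weights = np.full(len(baselines), 1.0 / len(baselines))
    return weights @ attributions

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.8, 0.3, -0.4])
baselines = rng.normal(size=(8, 3))

# Hypothetical suitability score: down-weight baselines far from the input.
dists = np.array([np.linalg.norm(x - b) for b in baselines])
weights = np.exp(-dists)
weights /= weights.sum()

eg_attr = aggregate_attributions(x, baselines, w)            # uniform (EG)
wg_attr = aggregate_attributions(x, baselines, w, weights)   # weighted
```

For this quadratic model the path integrand is linear in alpha, so the midpoint sum satisfies completeness essentially exactly: the attributions for one baseline sum to f(x) - f(baseline).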
Problem

Research questions and friction points this paper is trying to address.

Improving feature attribution by weighting baseline inputs differently
Unsupervised evaluation of baseline suitability for accurate explanations
Reducing computational cost while maintaining explanation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weighted Average Gradients for baseline selection
Unsupervised evaluation of baseline suitability
Filters effective baselines to reduce computation
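The last innovation point, filtering a compact baseline subset per input to cut computation, can be sketched as a top-k selection over suitability weights. The helper below is a hypothetical illustration of that idea, not the paper's selection rule.

```python
import numpy as np

def select_top_baselines(weights, k):
    """Keep the k highest-weight baselines and renormalize their weights.
    Hypothetical sketch: explanations are then computed only over this
    subset, trading a small accuracy loss for lower gradient cost."""
    idx = np.argsort(weights)[::-1][:k]   # indices sorted by descending weight
    kept = weights[idx]
    return idx, kept / kept.sum()

weights = np.array([0.05, 0.40, 0.10, 0.30, 0.15])
idx, renorm = select_top_baselines(weights, k=3)
# idx ranks baselines by suitability; renorm sums to 1 over the kept subset
```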
Kien Tran Duc Tuan
School of Information Technology and Communication, Hanoi University of Science and Technology
Tam Nguyen Trong
School of Information Technology and Communication, Hanoi University of Science and Technology
Son Nguyen Hoang
School of Information Technology and Communication, Hanoi University of Science and Technology
Khoat Than
Hanoi University of Science and Technology
Machine Learning · Data Mining
A. Duc
School of Information Technology and Communication, Hanoi University of Science and Technology