🤖 AI Summary
Integrated Gradients (IG) suffers from high attribution noise and substantial computational overhead because it relies on low-precision Riemann sum approximations. To address this, we propose RiemannOpt, a framework that, for the first time, formulates the selection of Riemann integration points as a learnable optimization problem, with unified support for IG and its variants (e.g., Blur IG, Guided IG). The method combines gradient-guided adaptive sampling, explicit modeling of numerical integration error, and end-to-end training driven by the Insertion Score metric. Extensive experiments show that RiemannOpt improves Insertion Scores by up to 20% on mainstream deep learning models while reducing computational cost to as little as 25% of standard IG. This yields higher attribution fidelity and robustness, making deployment far more practical in resource-constrained environments such as edge devices, without sacrificing interpretability or accuracy.
📝 Abstract
Integrated Gradients (IG) is a widely used algorithm for attributing the outputs of a deep neural network to its input features. Because deep learning models admit no closed-form integrals, IG is computed with inexact Riemann sum approximations. This often introduces undesirable errors in the form of high levels of noise, leading to false insights into the model's decision-making process. We introduce a framework, RiemannOpt, that minimizes these errors by optimizing the sample point selection for the Riemann sum. Our algorithm is highly versatile and applicable to IG as well as its derivatives such as Blur IG and Guided IG. RiemannOpt achieves up to a 20% improvement in Insertion Scores. Additionally, it allows users to cut computational costs by up to a factor of four, making it well suited to constrained environments.
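For context, here is a minimal NumPy sketch of standard IG computed with a fixed, uniformly spaced Riemann sum over the straight-line path from a baseline to the input; this is the uniform sampling scheme whose sample points RiemannOpt would instead optimize. The function names and the toy model are illustrative assumptions, not code from the paper.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate IG with a midpoint Riemann sum over the straight-line
    path from `baseline` to `x`.

    grad_fn: gradient of the model output with respect to its input.
    """
    alphas = (np.arange(steps) + 0.5) / steps  # uniform midpoint sample points
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    # Average the path gradients, then scale by the input difference.
    return (x - baseline) * total / steps

# Toy model f(x) = sum(x**2), whose gradient 2*x we know analytically.
# From a zero baseline, exact IG attributions are x**2, and by the
# completeness axiom they sum to f(x) - f(baseline).
grad_fn = lambda z: 2.0 * z
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_fn, x, baseline, steps=200)
# attr is close to [1., 4., 9.] and sums to f(x) - f(baseline) = 14
```

For a real network, `grad_fn` would come from an autodiff framework, the integrand is no longer linear in the path parameter, and the quality of this uniform approximation depends heavily on how many steps are used, which is the cost/noise trade-off the paper targets.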