TLXML: Task-Level Explanation of Meta-Learning via Influence Functions

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Meta-learning lacks interpretability in few-shot learning and distribution-shift scenarios, hindering trust in and diagnosis of meta-model behavior. Method: This paper introduces the first task-level attribution method, quantifying how much each past training task influences both adaptation to and inference on a novel task. It integrates influence functions into the bi-level optimization framework of meta-learning and constructs an efficient Hessian approximation from the Gauss–Newton matrix, overcoming the computational bottleneck of second-order derivatives. Contribution/Results: The method is compatible with mainstream meta-learners, including MAML and Prototypical Networks, and significantly improves task discriminability and task-distribution identification accuracy on image classification benchmarks. By enabling fine-grained attribution at the task level, it provides principled interpretability support for the reliable deployment of meta-models under distribution shift and data scarcity.

📝 Abstract
Adaptation via meta-learning is regarded as an ingredient for addressing data shortage and distribution shift in real-world applications, but it also introduces a new risk: inappropriate model updates in the user's environment, which raises the demand for explainability. Among the various types of XAI methods, explanation based on past experience requires special consideration in meta-learning because of its bi-level training structure, and this setting has remained unexplored. In this work, we propose influence functions for explaining meta-learning that measure the sensitivity of adaptation and inference to each training task. We also argue that approximating the Hessian with the Gauss-Newton matrix resolves computational barriers peculiar to meta-learning. We demonstrate the adequacy of the method through experiments on task distinction and task-distribution distinction, using image classification tasks with MAML and Prototypical Network.
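To make the abstract's core idea concrete, here is a minimal, generic sketch of an influence-function estimate with a Gauss-Newton Hessian approximation. This is not the paper's implementation: the function names, the per-task Jacobian inputs, and the damping term are assumptions for illustration. It only shows the classic estimate I = -∇L_test^T H^{-1} ∇L_train, with the Hessian replaced by a damped Gauss-Newton matrix, which is the kind of approximation the paper argues makes the computation tractable for meta-learning.

```python
import numpy as np

def gauss_newton_hessian(jacobians, damping=1e-3):
    """Gauss-Newton approximation of the Hessian: H ~ (1/n) sum_i J_i^T J_i.

    `jacobians` is a list of per-task Jacobians of the model outputs w.r.t.
    the (meta-)parameters. A small damping term keeps H positive definite
    and invertible.
    """
    d = jacobians[0].shape[-1]
    H = sum(J.T @ J for J in jacobians) / len(jacobians)
    return H + damping * np.eye(d)

def task_influence(test_grad, train_grad, H):
    """Classic influence estimate: I = -g_test^T H^{-1} g_train.

    A large-magnitude value indicates that the training task (whose loss
    gradient is `train_grad`) strongly affects the test-time quantity
    (whose gradient is `test_grad`).
    """
    return -test_grad @ np.linalg.solve(H, train_grad)

# Toy usage with random gradients in a 3-parameter model.
rng = np.random.default_rng(0)
jacobians = [rng.standard_normal((5, 3)) for _ in range(4)]
H = gauss_newton_hessian(jacobians)
influence = task_influence(rng.standard_normal(3), rng.standard_normal(3), H)
```

In practice, forming and inverting H explicitly is only feasible for tiny models; real implementations use Hessian-vector products and iterative solvers, and in the meta-learning setting the gradients run through the bi-level adaptation step rather than a single loss.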
Problem

Research questions and friction points this paper is trying to address.

Meta-Learning
Model Interpretability
Adaptive Inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-Learning Interpretability
Influence Functions
Gauss-Newton Approximation