🤖 AI Summary
To address the weak cross-task generalization and insufficient robustness of meta-learning models in few-shot learning scenarios, this paper proposes DGS-MAML—a meta-learning algorithm integrating domain generalization and sharpness-aware optimization. Within a bilevel optimization framework, DGS-MAML jointly models gradient matching (to improve task adaptability) and sharpness minimization (to enhance parameter robustness). It provides, for the first time for MAML-style methods, PAC-Bayes generalization bounds and convergence guarantees. Extensive experiments on multiple few-shot benchmark datasets demonstrate that DGS-MAML consistently outperforms mainstream approaches—including MAML and MetaReg—with average accuracy gains of 2.3–5.1%. Moreover, it exhibits superior robustness under distributional shift. The implementation is publicly available.
📝 Abstract
This paper introduces Domain Generalization Sharpness-Aware Minimization Model-Agnostic Meta-Learning (DGS-MAML), a novel meta-learning algorithm designed to generalize across tasks with limited training data. DGS-MAML combines gradient matching with sharpness-aware minimization in a bi-level optimization framework to enhance model adaptability and robustness. We support our method with theoretical analysis using PAC-Bayes and convergence guarantees. Experimental results on benchmark datasets show that DGS-MAML outperforms existing approaches in terms of accuracy and generalization. The proposed method is particularly useful for scenarios requiring few-shot learning and quick adaptation, and the source code is publicly available at GitHub.