Domain-Generalization to Improve Learning in Meta-Learning Algorithms

📅 2025-08-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the weak cross-task generalization and insufficient robustness of meta-learning models in few-shot learning scenarios, this paper proposes DGS-MAML—a meta-learning algorithm integrating domain generalization and sharpness-aware optimization. Within a bilevel optimization framework, DGS-MAML jointly models gradient matching (to improve task adaptability) and sharpness minimization (to enhance parameter robustness). It provides, for the first time for MAML-style methods, PAC-Bayes generalization bounds and convergence guarantees. Extensive experiments on multiple few-shot benchmark datasets demonstrate that DGS-MAML consistently outperforms mainstream approaches—including MAML and MetaReg—with average accuracy gains of 2.3–5.1%. Moreover, it exhibits superior robustness under distributional shift. The implementation is publicly available.

Technology Category

Application Category

📝 Abstract
This paper introduces Domain Generalization Sharpness-Aware Minimization Model-Agnostic Meta-Learning (DGS-MAML), a novel meta-learning algorithm designed to generalize across tasks with limited training data. DGS-MAML combines gradient matching with sharpness-aware minimization in a bi-level optimization framework to enhance model adaptability and robustness. We support our method with theoretical analysis using PAC-Bayes and convergence guarantees. Experimental results on benchmark datasets show that DGS-MAML outperforms existing approaches in terms of accuracy and generalization. The proposed method is particularly useful for scenarios requiring few-shot learning and quick adaptation, and the source code is publicly available at GitHub.
Problem

Research questions and friction points this paper is trying to address.

Improves generalization in meta-learning with limited data
Enhances model adaptability and robustness across tasks
Optimizes few-shot learning and quick adaptation scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines gradient matching and sharpness-aware minimization
Uses bi-level optimization for adaptability
Theoretically analyzed with PAC-Bayes guarantees
🔎 Similar Papers
No similar papers found.
Usman Anjum
Usman Anjum
Assistant Professor, Ottawa University
AI and machine learning
C
Chris Stockman
Department of Computer Science, University of Cincinnati, Cincinnati, Ohio, 45221, USA
Cat Luong
Cat Luong
University of Cincinnati
Natural Language ProcessingMachine LearningDeep Learning
J
Justin Zhan
Department of Computer Science, University of Cincinnati, Cincinnati, Ohio, 45221, USA