🤖 AI Summary
This work addresses the limited cross-task generalization of models in meta-learning. We propose ConML, a general framework that extends contrastive learning from representation space to *model space*, using task identity as supervision: it pulls together models trained on different subsets of the same task and pushes apart models trained on distinct tasks, directly in parameter space. ConML is architecture- and paradigm-agnostic, integrating seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms, as well as in-context learning. Extensive experiments show that ConML consistently improves performance across diverse few-shot benchmarks, combining broad applicability with empirical effectiveness. By applying contrastive regularization directly to model parameters, ConML offers a new perspective on model-space regularization in meta-learning.
📝 Abstract
Meta-learning enables learning systems to adapt quickly to new tasks, much as humans do. To emulate this rapid learning and strengthen alignment and discrimination abilities, we propose ConML, a universal meta-learning framework that can be applied to various meta-learning algorithms without relying on a specific model architecture or target model. The core of ConML is task-level contrastive learning, which extends contrastive learning from the representation space of unsupervised learning to the model space of meta-learning. Leveraging task identity as an additional supervision signal during meta-training, we contrast the outputs of the meta-learner in model space: minimizing inner-task distance (between models trained on different subsets of the same task) and maximizing inter-task distance (between models from different tasks). We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms, as well as in-context learning, yielding performance improvements across diverse few-shot learning tasks.
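To make the inner-task/inter-task contrast concrete, here is a minimal sketch of a task-level contrastive loss over model parameters. It is an illustrative assumption, not the paper's exact objective: models are represented as flattened parameter vectors, distance is Euclidean, and the inter-task term uses a hypothetical hinge margin; the `models_by_task` layout and `margin` parameter are our own naming.

```python
import numpy as np

def model_distance(w_a, w_b):
    # Euclidean distance between two flattened model parameter vectors.
    return float(np.linalg.norm(w_a - w_b))

def task_contrastive_loss(models_by_task, margin=1.0):
    """Illustrative task-level contrastive loss in model space.

    models_by_task: {task_id: [param_vector, ...]}, where each vector is a
    model the meta-learner produced from a different subset of that task.
    Inner-task distances are minimized; inter-task distances are pushed
    beyond a hinge margin (a simplifying assumption for this sketch).
    """
    inner, inter = [], []
    tasks = list(models_by_task)
    for t in tasks:
        ms = models_by_task[t]
        # Inner-task: models from subsets of the same task should be close.
        for i in range(len(ms)):
            for j in range(i + 1, len(ms)):
                inner.append(model_distance(ms[i], ms[j]))
        # Inter-task: models from different tasks should be far apart.
        for u in tasks:
            if u == t:
                continue
            for wa in ms:
                for wb in models_by_task[u]:
                    inter.append(max(0.0, margin - model_distance(wa, wb)))
    return float(np.mean(inner) + np.mean(inter))
```

In meta-training this term would be added to the usual meta-objective, so the meta-learner is rewarded for mapping subsets of one task to nearby models and different tasks to well-separated ones.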