DRESS: Disentangled Representation-based Self-Supervised Meta-Learning for Diverse Tasks

📅 2025-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance limitations that meta-learning faces in few-shot learning when task diversity is insufficient, this paper proposes DRESS, a task-agnostic self-supervised meta-learning framework. Methodologically, it integrates disentangled representation learning and self-supervised pretraining into the meta-training pipeline to generate diverse proxy tasks, reducing reliance on homogeneous task distributions. Its key contributions are: (1) a novel class-partition-based metric for quantifying task diversity directly in the input space; and (2) effective decoupling of semantic factors of variation to enhance representation robustness and adaptability. Evaluated on benchmark datasets with multiple factors of variation and varying complexity, DRESS outperforms competing methods on the majority of datasets and task setups under standard few-shot settings (e.g., 5-way 1-shot and 5-shot), demonstrating improved rapid adaptation and supporting the efficacy and generality of disentangled-representation-driven meta-learning.

📝 Abstract
Meta-learning represents a strong class of approaches for solving few-shot learning tasks. Nonetheless, recent research suggests that simply pre-training a generic encoder can potentially surpass meta-learning algorithms. In this paper, we first discuss the reasons why meta-learning fails to stand out in these few-shot learning experiments, and hypothesize that it is due to the few-shot learning tasks lacking diversity. We propose DRESS, a task-agnostic Disentangled REpresentation-based Self-Supervised meta-learning approach that enables fast model adaptation on highly diversified few-shot learning tasks. Specifically, DRESS utilizes disentangled representation learning to create self-supervised tasks that can fuel the meta-training process. Furthermore, we also propose a class-partition based metric for quantifying the task diversity directly on the input space. We validate the effectiveness of DRESS through experiments on datasets with multiple factors of variation and varying complexity. The results suggest that DRESS is able to outperform competing methods on the majority of the datasets and task setups. Through this paper, we advocate for a re-examination of proper setups for task adaptation studies, and aim to reignite interest in the potential of meta-learning for solving few-shot learning tasks via disentangled representations.
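The abstract's core idea — using disentangled representations to manufacture diverse self-supervised few-shot tasks — can be illustrated with a minimal sketch. The paper's exact construction is not reproduced here; this toy version assumes a pretrained disentangled encoder (stood in for by random latents) and partitions samples along each latent dimension via k-means to define pseudo-classes, then samples N-way K-shot episodes from each partition.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for disentangled latents from a pretrained encoder (hypothetical):
# 600 samples, 4 latent dimensions, each ideally capturing one factor of variation.
latents = rng.normal(size=(600, 4))

def make_task(latents, dim, n_way=5, k_shot=1, q_queries=5, seed=0):
    """Build one pseudo few-shot episode by partitioning samples along a
    single latent dimension into n_way pseudo-classes."""
    km = KMeans(n_clusters=n_way, n_init=10, random_state=seed)
    pseudo_labels = km.fit_predict(latents[:, [dim]])
    support, query = [], []
    for c in range(n_way):
        idx = np.flatnonzero(pseudo_labels == c)
        picked = rng.choice(idx, size=k_shot + q_queries, replace=False)
        support += [(i, c) for i in picked[:k_shot]]
        query += [(i, c) for i in picked[k_shot:]]
    return support, query

# One episode per latent dimension: each task's class boundary is defined by a
# different factor of variation, diversifying the meta-training distribution.
tasks = [make_task(latents, dim) for dim in range(latents.shape[1])]
print(len(tasks), len(tasks[0][0]))  # 4 episodes, each with a 5-way 1-shot support set
```

Because each latent dimension induces a different class partition, episodes drawn from different dimensions force the meta-learner to adapt to genuinely distinct decision boundaries rather than variations of one labeling scheme.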
Problem

Research questions and friction points this paper is trying to address.

Addresses limitations of meta-learning in few-shot tasks
Proposes DRESS for diverse few-shot learning adaptation
Introduces metric to quantify task diversity in input space
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled representation learning for self-supervised tasks
Task-agnostic meta-learning for diverse few-shot tasks
Class-partition metric to quantify task diversity
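The class-partition diversity metric itself is not spelled out on this page, but the idea of quantifying how differently tasks partition the same inputs can be sketched with an illustrative proxy (not the paper's exact formula): one minus the mean pairwise adjusted Rand index between the label partitions that different tasks induce over shared inputs.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import adjusted_rand_score

def partition_diversity(label_sets):
    """Illustrative diversity proxy (not the paper's exact metric):
    1 minus the mean pairwise adjusted Rand index between the class
    partitions that different tasks induce over the same inputs.
    Higher values mean the tasks slice the input space more differently."""
    scores = [adjusted_rand_score(a, b) for a, b in combinations(label_sets, 2)]
    return 1.0 - float(np.mean(scores))

# Toy "tasks" labeling the same 6 inputs:
same = [np.array([0, 0, 1, 1, 2, 2])] * 3            # identical partitions
mixed = [np.array([0, 0, 1, 1, 2, 2]),
         np.array([0, 1, 0, 1, 0, 1])]               # orthogonal partitions
print(partition_diversity(same))   # 0.0 — identical tasks, no diversity
print(partition_diversity(mixed))  # higher — tasks disagree on class boundaries
```

A task distribution built only from one labeling scheme scores near zero under such a measure, which is exactly the failure mode the paper attributes to standard few-shot benchmarks.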
Wei Cui — Layer 6 AI, Toronto, Canada
Tongzi Wu — Layer 6 AI, Toronto, Canada
Jesse C. Cresswell — Layer 6 AI (Trustworthy ML, Deep Generative Modelling, Quantum Information)
Yi Sui — Layer 6 AI (Self-Supervised Learning, Explainability, Trustworthy AI)
Keyvan Golestan — Layer 6 AI, Toronto, Canada