Universal Algorithm-Implicit Learning

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing meta-learning approaches are constrained by fixed feature and label spaces and lack a precise definition of "generality," limiting their applicability and comparability. This work proposes a theoretical framework that formally defines practical universality and distinguishes between explicit and implicit algorithm learning in meta-learners. Building on this foundation, we introduce TAIL, a Transformer-based implicit meta-learner that employs random projections for cross-modal feature encoding, leverages random label embeddings to generalize to substantially larger label spaces, and incorporates an efficient inline query mechanism. TAIL achieves state-of-the-art performance on standard few-shot benchmarks, demonstrates strong generalization to unseen domains and modalities (such as performing text classification after training solely on images), scales to tasks with up to 20 times more classes than seen during training, and significantly reduces computational overhead.

📝 Abstract
Current meta-learning methods are constrained to narrow task distributions with fixed feature and label spaces, limiting applicability. Moreover, the current meta-learning literature uses key terms like "universal" and "general-purpose" inconsistently and lacks precise definitions, hindering comparability. We introduce a theoretical framework for meta-learning which formally defines practical universality and introduces a distinction between algorithm-explicit and algorithm-implicit learning, providing a principled vocabulary for reasoning about universal meta-learning methods. Guided by this framework, we present TAIL, a transformer-based algorithm-implicit meta-learner that functions across tasks with varying domains, modalities, and label configurations. TAIL features three innovations over prior transformer-based meta-learners: random projections for cross-modal feature encoding, random injection label embeddings that extrapolate to larger label spaces, and efficient inline query processing. TAIL achieves state-of-the-art performance on standard few-shot benchmarks while generalizing to unseen domains. Unlike other meta-learning methods, it also generalizes to unseen modalities, solving text classification tasks despite training exclusively on images, handles tasks with up to 20× more classes than seen during training, and provides orders-of-magnitude computational savings over prior transformer-based approaches.
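The abstract names random projections for cross-modal feature encoding and random label embeddings, but does not spell out their mechanics. As an illustration only, here is a minimal numpy sketch of the general ideas: projecting features of arbitrary dimensionality into a fixed model dimension with a random Gaussian matrix, and assigning classes freshly sampled random vectors so no fixed-size embedding table caps the label space. All function names, dimensions, and scaling choices here are assumptions, not TAIL's actual implementation.

```python
import numpy as np

def random_projection_encode(x, d_model, seed=0):
    """Map a feature vector of arbitrary input dimension into a fixed
    d_model-dimensional space via a random Gaussian matrix. Scaling by
    1/sqrt(d_in) roughly preserves norms (Johnson-Lindenstrauss style),
    so features from different modalities land in one shared space."""
    d_in = x.shape[-1]
    rng = np.random.default_rng(seed)
    P = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_in, d_model))
    return x @ P

def random_label_embedding(label_id, d_model, task_seed=0):
    """Represent a class by a random vector sampled per task: the same
    label_id maps to the same vector within a task, but because no
    learned table is involved, the label space can grow beyond what
    was seen during training."""
    rng = np.random.default_rng(task_seed * 100003 + label_id)
    return rng.normal(0.0, 1.0 / np.sqrt(d_model), size=(d_model,))

# Example: image features (2048-d) and text features (768-d)
# both land in the same 256-d encoder space.
img_feat = np.ones(2048)
txt_feat = np.ones(768)
z_img = random_projection_encode(img_feat, 256)
z_txt = random_projection_encode(txt_feat, 256)
```

A transformer fed such encodings never sees raw modality-specific dimensions or a fixed class vocabulary, which is one plausible route to the cross-modal and large-label-space generalization the abstract claims.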
Problem

Research questions and friction points this paper is trying to address.

meta-learning
universal learning
task distribution
feature space
label space
Innovation

Methods, ideas, or system contributions that make the work stand out.

algorithm-implicit learning
universal meta-learning
cross-modal generalization
random projection
inline query processing