Small Models, Smarter Learning: The Power of Joint Task Training

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the minimal parameter count required for small Transformer models to learn nested mathematical operations (e.g., SUM, MAX, MED) in the ListOps benchmark, and how this relates to intrinsic task difficulty. Methodologically, it employs progressive difficulty scaling, multi-task joint training, and systematic ablation studies, complemented by embedding visualizations and module-wise activation tracking. Results show that multi-task training substantially lowers the learning threshold for challenging tasks like SUM: models too small to learn SUM in isolation—falling below the single-task capacity threshold—acquire robust, generalizable SUM capability after multi-task pretraining. Crucially, this work provides the first evidence that task composition induces *number sense* representations: models develop numerically structured embeddings, exhibit strong parity discrimination, and rely more heavily on attention mechanisms. These findings offer a novel perspective on the emergence of mathematical reasoning capabilities in resource-constrained neural models.
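The multi-task joint training described above can be pictured as interleaving examples from the different operations into one training stream. The sketch below is illustrative only: the task names follow the paper (SUM, MAX, MED), but the example format, modulus, and sampling scheme are assumptions, not the authors' setup.

```python
import random

# Assumed task set from the paper; everything else here is illustrative.
TASKS = ("SUM", "MAX", "MED")

def make_example(task, operands, n=10):
    """Build one flat (un-nested, for brevity) ListOps-style example.

    SUM is taken modulo n; the modulus value is an assumption.
    """
    if task == "SUM":
        label = sum(operands) % n
    elif task == "MAX":
        label = max(operands)
    else:  # MED: median of an odd-length operand list
        label = sorted(operands)[len(operands) // 2]
    text = f"[{task} {' '.join(map(str, operands))}]"
    return text, label

def joint_stream(num_examples, seed=0):
    """Yield examples with the task drawn uniformly at random,
    i.e. a joint SUM+MAX+MED training mixture."""
    rng = random.Random(seed)
    for _ in range(num_examples):
        task = rng.choice(TASKS)
        operands = [rng.randrange(10) for _ in range(5)]
        yield make_example(task, operands)
```

A single-task (SUM-only) stream is the same generator with `TASKS = ("SUM",)`; the paper's finding is that models trained on the mixture acquire SUM at smaller parameter counts than models trained on the SUM-only stream.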

📝 Abstract
The ability of a model to learn a task depends strongly on both the task difficulty and the model size. We aim to understand how task difficulty relates to the minimum number of parameters required for learning specific tasks in small transformer models. Our study focuses on the ListOps dataset, which consists of nested mathematical operations. We gradually increase task difficulty by introducing new operations or combinations of operations into the training data. We observe that sum modulo n is the hardest to learn. Curiously, when combined with other operations such as maximum and median, the sum operation becomes easier to learn and requires fewer parameters. We show that joint training not only improves performance but also leads to qualitatively different model behavior. We show evidence that models trained only on SUM might be memorizing and fail to capture the number structure in the embeddings. In contrast, models trained on a mixture of SUM and other operations exhibit number-like representations in the embedding space and a strong ability to distinguish parity. Furthermore, the SUM-only model relies more heavily on its feedforward layers, while the jointly trained model activates its attention mechanism more. Finally, we show that learning pure SUM can be induced in models below the learning threshold of pure SUM by pretraining them on MAX+MED. Our findings indicate that emergent abilities in language models depend not only on model size, but also on the training curriculum.
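To make the task concrete, the nested operations in ListOps can be evaluated with a small recursive parser. This is a minimal sketch, not the benchmark's official tooling: the bracketed syntax mirrors ListOps, but the exact operation tokens and the modulus for SUM (10 here) are assumptions.

```python
def evaluate(expr, n=10):
    """Evaluate a nested ListOps-style expression, e.g. "[MAX 2 [SUM 3 9] 4]".

    SUM is computed modulo n (the modulus value is assumed, not from the paper);
    MED takes the median of an odd-length argument list.
    """
    # Pad brackets so a plain split() tokenizes the expression.
    tokens = expr.replace("[", " [ ").replace("]", " ] ").split()
    pos = 0

    def parse():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok != "[":
            return int(tok)          # leaf: a single digit/number
        op = tokens[pos]
        pos += 1
        args = []
        while tokens[pos] != "]":
            args.append(parse())     # recurse into nested sub-expressions
        pos += 1                     # consume the closing "]"
        if op == "SUM":
            return sum(args) % n
        if op == "MAX":
            return max(args)
        if op == "MED":
            return sorted(args)[len(args) // 2]
        raise ValueError(f"unknown operation: {op}")

    return parse()
```

For example, `evaluate("[MAX 2 [SUM 3 9] 4]")` reduces the inner SUM to (3+9) mod 10 = 2 and then takes MAX(2, 2, 4) = 4. Increasing the nesting depth and the mix of operations is exactly the knob the paper turns to scale task difficulty.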
Problem

Research questions and friction points this paper is trying to address.

Understand task difficulty vs. model size in small transformers
Study joint training's impact on learning nested math operations
Explore how training curriculum affects emergent model abilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint task training lowers the parameter threshold for hard tasks
Pretraining on easier tasks (MAX+MED) enables learning harder ones (SUM)
Reliance on attention vs. feedforward layers shifts with the training curriculum