Towards Data-Efficient Language Models: A Child-Inspired Approach to Language Learning

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the low data efficiency of large language models (LLMs) by proposing a cognitively inspired, data-efficient modeling paradigm grounded in child language acquisition. Methodologically, we train a compact language model—using only 10 million tokens (8.5M from child-directed speech and 1.5M from television dialogues)—with a 32K-token vocabulary, curriculum learning, multimodal media corpus integration (BabyLM + TVR), and stringent child-centered filtering, without leveraging any external LLM pretraining. Our core contribution is the first systematic translation of developmental principles—namely, low-input volume, multimodal input, and lexically constrained exposure—into concrete architectural and training design choices. Experiments demonstrate that this paradigm matches or exceeds baseline performance across multiple benchmarks, validating that “high-quality, structured small-scale data + cognition-informed training” outperforms brute-force scaling. The results underscore that data quality and organizational structure are more decisive than sheer scale.

📝 Abstract
In this work, we explain the approach we employed in the BabyLM Challenge, which uses various methods of training language models (LMs) with significantly less data than traditional large language models (LLMs), inspired by how human children learn. While human children are exposed to far less linguistic input than an LLM, they still achieve remarkable language understanding and generation abilities. To this end, we develop a model trained on a curated dataset consisting of 10 million words, primarily sourced from child-directed transcripts. The 2024 BabyLM Challenge initial dataset of 10M words is filtered to 8.5M. Next, it is supplemented with a randomly selected subset of the TVR dataset consisting of 1.5M words of television dialogues. The latter dataset ensures that, similar to children, the model is also exposed to language through media. Furthermore, we reduce the vocabulary size to 32,000 tokens, aligning it with the limited vocabulary of children in the early stages of language acquisition. We use curriculum learning and are able to match the baseline on certain benchmarks while surpassing it on others. Additionally, incorporating common LLM training datasets, such as MADLAD-400, degrades performance. These findings underscore the importance of dataset selection, vocabulary scaling, and curriculum learning in creating more data-efficient language models that better mimic human learning processes.
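The corpus-assembly step described above (8.5M words of filtered child-directed speech plus a random 1.5M-word subset of TVR dialogues) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the function name, per-source word budgets as parameters, and the assumption that child-centered filtering has already happened upstream are all ours.

```python
import random

def build_corpus(child_directed, tv_dialogues,
                 child_budget=8_500_000, tv_budget=1_500_000, seed=0):
    """Combine a filtered child-directed corpus with a randomly selected
    subset of TV dialogues, capping each source at a word budget
    (budgets mirror the 8.5M + 1.5M split reported in the abstract)."""
    rng = random.Random(seed)
    corpus = []

    # Take child-directed lines (assumed already child-filtered) up to budget.
    used = 0
    for line in child_directed:
        n = len(line.split())
        if used + n > child_budget:
            break
        corpus.append(line)
        used += n

    # Take a random subset of TV dialogues up to its own budget.
    tv_lines = list(tv_dialogues)
    rng.shuffle(tv_lines)
    used = 0
    for line in tv_lines:
        n = len(line.split())
        if used + n > tv_budget:
            break
        corpus.append(line)
        used += n

    return corpus
```

In practice each "line" would be an utterance or transcript segment; the word-count cap is a crude stand-in for whatever token accounting the authors actually used.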
Problem

Research questions and friction points this paper is trying to address.

Develop data-efficient language models inspired by child learning.
Train models with limited data and reduced vocabulary size.
Improve language understanding using curated child-directed datasets.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Child-inspired training on a 10M-word dataset
Reduced vocabulary to 32,000 tokens
Curriculum learning improves model efficiency
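The curriculum-learning contribution above can be illustrated with a short ordering sketch. The summary does not state the paper's actual difficulty criterion, so using mean sentence length as the proxy here is purely an assumption for illustration:

```python
def curriculum_order(examples):
    """Sort training examples from 'easy' to 'hard' using mean words per
    sentence as a difficulty proxy (an assumed criterion -- the paper's
    exact curriculum schedule is not given in this summary)."""
    def difficulty(text):
        # Crude sentence split on terminal punctuation.
        normalized = text.replace("!", ".").replace("?", ".")
        sentences = [s for s in normalized.split(".") if s.strip()]
        if not sentences:
            return 0.0
        return sum(len(s.split()) for s in sentences) / len(sentences)
    return sorted(examples, key=difficulty)
```

A trainer would then feed batches in this order (or anneal from the easy prefix to the full list), so the model sees short, simple utterances, as a child would, before longer ones.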
Mohammad Amin Ghanizadeh
Department of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
Mohammad Javad Dousti
University of Southern California
Natural Language Processing · Large Language Models · Big Data