🤖 AI Summary
This study investigates how efficiently language models acquire factual knowledge that follows a long-tailed distribution, examining the gap between memorization of high-frequency and low-frequency relational facts and how that gap is modulated by model architecture and scale. We first construct a fine-grained frequency annotation schema for relational facts within training corpora. Leveraging a unified pretraining dataset, we integrate relation extraction, frequency counting, and zero-/few-shot fact retrieval evaluation across multiple Transformer variants of varying scales. Key contributions include: (1) introducing “fact learning efficiency” as a novel evaluation dimension; (2) revealing performance convergence on high-frequency facts across models, contrasted with substantial divergence on low-frequency facts; and (3) demonstrating that parameter count is not the primary determinant of low-frequency fact acquisition—certain medium- and small-scale models achieve superior sample efficiency.
📝 Abstract
Sample efficiency is a crucial property of language models with practical implications for training cost. In real-world text, information follows a long-tailed distribution, yet we expect models to learn and recall both frequent and infrequent facts. Sample-efficient models are better equipped to learn and retain rare information without requiring excessive exposure. This study analyzes multiple models of varying architectures and sizes, all trained on the same pre-training data. By annotating relational facts with their frequencies in the training corpus, we examine how model performance varies with fact frequency. Our findings show that most models perform similarly on high-frequency facts but differ notably on low-frequency facts. This analysis provides new insights into the relationship between model architecture, size, and factual learning efficiency.
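The frequency-stratified evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the bucket boundaries, data format, and function names are assumptions made for the example.

```python
from collections import defaultdict

# Illustrative frequency buckets (boundaries are assumptions, not from the paper)
BUCKETS = [(1, 10), (11, 100), (101, 1000), (1001, float("inf"))]

def bucket_of(freq):
    """Map a fact's training-corpus frequency to a bucket label."""
    for lo, hi in BUCKETS:
        if lo <= freq <= hi:
            return f"{lo}-{hi}"
    return None

def accuracy_by_frequency(facts):
    """facts: iterable of (frequency, correct) pairs, where `correct` marks
    whether the model retrieved the fact in a zero-/few-shot probe.
    Returns per-bucket retrieval accuracy."""
    totals = defaultdict(lambda: [0, 0])  # bucket -> [num correct, num total]
    for freq, correct in facts:
        b = bucket_of(freq)
        totals[b][0] += int(correct)
        totals[b][1] += 1
    return {b: c / n for b, (c, n) in totals.items()}

# Toy data: rare facts recalled unreliably, common facts recalled well
facts = [(5, False), (8, True), (250, True), (300, True), (5000, True)]
print(accuracy_by_frequency(facts))
# → {'1-10': 0.5, '101-1000': 1.0, '1001-inf': 1.0}
```

Comparing these per-bucket accuracy curves across models of different architectures and sizes is what surfaces the paper's central observation: curves converge in the high-frequency buckets and diverge in the low-frequency ones.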