From Data to Knowledge: Evaluating How Efficiently Language Models Learn Facts

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the efficiency with which language models acquire factual knowledge distributed in long-tailed patterns, particularly examining disparities in memorizing high-frequency versus low-frequency relational facts and how these disparities are modulated by model architecture and scale. We first construct a fine-grained frequency annotation schema for relational facts within training corpora. Leveraging a unified pretraining dataset, we integrate relation extraction, frequency counting, and zero-/few-shot fact retrieval evaluation across multiple Transformer variants of varying scales. Key contributions include: (1) introducing “fact learning efficiency” as a novel evaluation dimension; (2) revealing performance convergence on high-frequency facts across models, contrasted with substantial divergence on low-frequency facts; and (3) demonstrating that parameter count is not the primary determinant of low-frequency fact acquisition—certain medium- and small-scale models achieve superior sample efficiency.
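The frequency annotation step described above can be sketched as follows. This is a minimal illustration, not the paper's actual schema: the `(subject, relation, object)` fact format, the co-occurrence counting heuristic, and the bucket thresholds are all assumptions made for the example.

```python
from collections import Counter

def count_fact_frequencies(corpus, facts):
    """Count how many documents mention both the subject and the object
    of each relational fact (a crude proxy for fact frequency)."""
    counts = Counter()
    for doc in corpus:
        text = doc.lower()
        for subj, rel, obj in facts:
            if subj.lower() in text and obj.lower() in text:
                counts[(subj, rel, obj)] += 1
    return counts

def bucket(freq, thresholds=(1, 10, 100)):
    """Map a raw frequency to a coarse bucket label (thresholds are illustrative)."""
    if freq < thresholds[0]:
        return "unseen"
    if freq < thresholds[1]:
        return "low"
    if freq < thresholds[2]:
        return "medium"
    return "high"

# Toy corpus and fact list for demonstration.
corpus = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris, France.",
    "Tokyo is the capital of Japan.",
]
facts = [
    ("Paris", "capital_of", "France"),
    ("Tokyo", "capital_of", "Japan"),
    ("Canberra", "capital_of", "Australia"),
]
freqs = count_fact_frequencies(corpus, facts)
annotated = {f: bucket(freqs[f]) for f in facts}
```

A real pipeline would use a proper relation extractor and exact-match or alias-aware counting over the full pretraining corpus; the point here is only the shape of the annotation: each fact is tagged with a corpus frequency and a frequency bucket before evaluation.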

📝 Abstract
Sample efficiency is a crucial property of language models with practical implications for training efficiency. In real-world text, information follows a long-tailed distribution. Yet, we expect models to learn and recall frequent and infrequent facts. Sample-efficient models are better equipped to handle this challenge of learning and retaining rare information without requiring excessive exposure. This study analyzes multiple models of varying architectures and sizes, all trained on the same pre-training data. By annotating relational facts with their frequencies in the training corpus, we examine how model performance varies with fact frequency. Our findings show that most models perform similarly on high-frequency facts but differ notably on low-frequency facts. This analysis provides new insights into the relationship between model architecture, size, and factual learning efficiency.
Problem

Research questions and friction points this paper is trying to address.

Evaluating how efficiently language models learn facts
Analyzing model performance on high- and low-frequency facts
Exploring the relationship between model architecture, size, and learning efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing multiple models of varied architectures and sizes trained on the same data
Annotating relational facts with their frequency in the training corpus
Comparing performance on high- versus low-frequency facts
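The comparison step in the list above amounts to computing per-bucket retrieval accuracy for each model. A minimal sketch, assuming each model's evaluation outcomes are available as `(bucket, correct)` pairs; the model names and numbers below are made up for illustration, not the paper's results.

```python
from collections import defaultdict

def accuracy_by_bucket(outcomes):
    """Aggregate fact-retrieval accuracy within each frequency bucket.

    `outcomes` is a list of (bucket_label, correct) pairs, one per
    evaluated fact.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for bucket_label, correct in outcomes:
        totals[bucket_label] += 1
        hits[bucket_label] += int(correct)
    return {b: hits[b] / totals[b] for b in totals}

# Hypothetical outcomes for two models evaluated on the same facts.
results = {
    "model_a": [("high", True), ("high", True), ("low", True), ("low", False)],
    "model_b": [("high", True), ("high", True), ("low", False), ("low", False)],
}
per_model = {name: accuracy_by_bucket(outcomes)
             for name, outcomes in results.items()}
```

In this toy setup the two models tie on the high-frequency bucket but diverge on the low-frequency one, which is exactly the pattern the paper reports: convergence on frequent facts, divergence on rare ones.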