OLMES: A Standard for Language Model Evaluations

📅 2024-06-12
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
🤖 AI Summary
Current LLM evaluation lacks standardized practices—particularly in prompt design, in-context example selection, probability normalization, and task formalization—leading to poor reproducibility and unreliable cross-model comparisons. Method: We introduce OLMES, an open-source, fully documented LLM evaluation standard. It systematically identifies and standardizes previously overlooked evaluation variables, enabling fair comparison between cloze-style and natural-language task formulations. Grounded in empirical analysis, ablation studies, and a comprehensive literature review, OLMES establishes technical specifications for prompt formatting, in-context learning configuration, probability normalization, and task formulation. Contribution/Results: OLMES improves the reliability, reproducibility, and comparability of LLM performance evaluations across model scales. By codifying best practices and resolving methodological inconsistencies, it advances LLM evaluation toward scientific rigor and community-wide standardization.

📝 Abstract
Progress in AI is often demonstrated by new models claiming improved performance on tasks measuring model capabilities. Evaluating language models can be particularly challenging, as choices of how a model is evaluated on a task can lead to large changes in measured performance. There is no common standard setup, so different models are evaluated on the same tasks in different ways, leading to claims about which models perform best not being reproducible. We propose OLMES, a completely documented, practical, open standard for reproducible LLM evaluations. In developing this standard, we identify and review the varying factors in evaluation practices adopted by the community - such as details of prompt formatting, choice of in-context examples, probability normalizations, and task formulation. In particular, OLMES supports meaningful comparisons between smaller base models that require the unnatural "cloze" formulation of multiple-choice questions against larger models that can utilize the original formulation. OLMES includes well-considered, documented recommendations guided by results from existing literature as well as new experiments resolving open questions.
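The abstract notes that probability-normalization choices in cloze-style multiple-choice scoring can change measured performance. As a minimal illustrative sketch (not the OLMES implementation; the function and normalization names here are hypothetical), length normalization can flip which answer a scorer selects:

```python
def score_choices(choice_logprobs, choice_texts, normalization="per_char"):
    """Pick the best multiple-choice answer from per-choice log-probabilities.

    Hypothetical sketch: choice_logprobs holds the total log-probability a
    model assigns to each answer string; choice_texts holds those strings.
    With no normalization, longer answers are penalized simply for having
    more tokens/characters; per-character normalization corrects for that.
    """
    scores = []
    for lp, text in zip(choice_logprobs, choice_texts):
        if normalization == "none":
            scores.append(lp)  # raw summed log-probability
        elif normalization == "per_char":
            scores.append(lp / max(len(text), 1))  # average per character
        else:
            raise ValueError(f"unknown normalization: {normalization}")
    return scores.index(max(scores))  # index of the predicted answer


# The same log-probabilities yield different winners under each scheme:
lps = [-10.0, -12.0]
texts = ["Paris", "A much longer answer"]
print(score_choices(lps, texts, "none"))      # favors the shorter answer
print(score_choices(lps, texts, "per_char"))  # favors the longer answer
```

This is exactly the kind of under-documented evaluation variable the paper argues must be fixed in a shared standard for cross-model comparisons to be meaningful.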
Problem

Research questions and friction points this paper is trying to address.

Lack of standardized language model evaluation
Irreproducible performance claims in AI
Need for consistent LLM assessment methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized LLM evaluation framework
Documented prompt formatting details
Supports base and large model comparisons
👥 Authors
Yuling Gu, Allen Institute for Artificial Intelligence
Oyvind Tafjord, Allen Institute for Artificial Intelligence
Bailey Kuehl, Allen Institute for AI
Dany Haddad, Allen Institute for Artificial Intelligence
Jesse Dodge, Allen Institute for AI
Hannaneh Hajishirzi, University of Washington; Allen AI

Topics: NLP, Machine Learning, Language models, AI