PRobELM: Plausibility Ranking Evaluation for Language Models

📅 2024-04-04
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work addresses evaluating large language models' (LLMs) ability to rank the plausibility of possible-but-unverified scenarios using their parametric world knowledge. To this end, the authors introduce PRobELM, a benchmark designed to systematically assess LLMs' capacity to distinguish more plausible scenarios from less plausible ones, rather than mere factual accuracy, filling a gap in existing evaluation frameworks. PRobELM constructs a time-sensitive test suite from Wikidata edit histories, temporally aligned with each model's training data, and covers three prompting paradigms: statement verification, text completion, and question answering. Experiments across diverse model architectures, parameter scales, and training-data recency settings reveal that factual accuracy does not directly correlate with plausibility ranking performance, while more recent training data consistently improves plausibility judgments across architectures. These findings position PRobELM as a knowledge-intensive evaluation paradigm that is particularly valuable for literature-based discovery and other applications requiring reasoning over uncertain or evolving knowledge.

📝 Abstract
This paper introduces PRobELM (Plausibility Ranking Evaluation for Language Models), a benchmark designed to assess language models' ability to discern more plausible from less plausible scenarios through their parametric knowledge. While benchmarks such as TruthfulQA emphasise factual accuracy or truthfulness, and others such as COPA explore plausible scenarios without explicitly incorporating world knowledge, PRobELM seeks to bridge this gap by evaluating models' capabilities to prioritise plausible scenarios that leverage world knowledge over less plausible alternatives. This design allows us to assess the potential of language models for downstream use cases such as literature-based discovery where the focus is on identifying information that is likely but not yet known. Our benchmark is constructed from a dataset curated from Wikidata edit histories, tailored to align the temporal bounds of the training data for the evaluated models. PRobELM facilitates the evaluation of language models across multiple prompting types, including statement, text completion, and question-answering. Experiments with 10 models of various sizes and architectures on the relationship between model scales, training recency, and plausibility performance, reveal that factual accuracy does not directly correlate with plausibility performance and that up-to-date training data enhances plausibility assessment across different model architectures.
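The core evaluation idea described in the abstract, ranking candidate statements by how probable a model finds them under its parametric knowledge, can be sketched with a toy scorer. Everything below is hypothetical (the token log-probabilities, the statements, and the function names are illustrative, not from the paper); the actual benchmark would score candidates with a real LLM's token log-likelihoods rather than a hand-built unigram table.

```python
# Toy stand-in for a language model: hypothetical unigram token
# log-probabilities. A real PRobELM-style evaluation would use an actual
# LLM's token log-likelihoods instead; these numbers are made up.
TOY_LOGPROBS = {
    "paris": -2.0, "is": -1.0, "the": -0.5, "capital": -3.0,
    "of": -1.0, "france": -2.5, "mars": -8.0,
}

def sequence_logprob(statement: str, logprobs: dict, oov: float = -12.0) -> float:
    """Length-normalized log-probability of a whitespace-tokenized statement.

    Unknown tokens fall back to the `oov` penalty.
    """
    tokens = statement.lower().split()
    total = sum(logprobs.get(tok, oov) for tok in tokens)
    return total / len(tokens)

def rank_by_plausibility(candidates: list, logprobs: dict) -> list:
    """Return candidate statements sorted from most to least plausible."""
    return sorted(candidates, key=lambda s: sequence_logprob(s, logprobs),
                  reverse=True)
```

Under this toy model, "paris is the capital of france" would outrank "paris is the capital of mars", since the only token that differs carries a much lower log-probability in the second statement; length normalization keeps candidates of different lengths comparable.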
Problem

Research questions and friction points this paper is trying to address.

Assess language models' plausibility ranking ability
Bridge gap between factual accuracy and plausibility
Evaluate models for literature-based discovery potential
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plausibility ranking evaluation
Wikidata edit histories dataset
Multiple prompting types evaluation
Moy Yuan
PhD Student, University of Cambridge
Natural Language Processing
Chenxi Whitehouse
Research Scientist at Meta
Natural Language Processing
Eric Chamoun
Department of Computer Science and Technology, University of Cambridge
Rami Aly
University of Cambridge
Fact-checking · Question answering · Low-resource NLP
Andreas Vlachos
Department of Computer Science and Technology, University of Cambridge