Don't Use LLMs to Make Relevance Judgments

📅 2024-09-23
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
🤖 AI Summary
This paper argues that large language models (LLMs) should not replace human assessors in creating relevance judgments for TREC-style information retrieval (IR) test collections. Building such judgments is complex and expensive: a typical track needs a team of trained, monitored contractors working for weeks, supported by purpose-built judging software. That cost has led IR researchers to explore LLM-based judging, including a data challenge at the ACM SIGIR 2024 workshop "LLM4Eval" in which participants reproduced TREC Deep Learning Track judgments, following the approach of Thomas et al. Originally delivered as the workshop keynote and presented here in article form, the paper's bottom-line-up-front message is: don't use LLMs to create relevance judgments for TREC-style evaluations.
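In reproduction exercises like the LLM4Eval data challenge, agreement between LLM-generated and human relevance labels is commonly summarized with Cohen's kappa. A minimal, self-contained sketch (the labels below are toy values for illustration, not data from the paper):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where the two annotators match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence: sum over categories of p_a * p_b.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    categories = set(count_a) | set(count_b)
    expected = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical 4-level graded relevance labels (0-3, as in TREC DL)
# for ten query-passage pairs.
human = [2, 0, 3, 1, 0, 2, 3, 0, 1, 2]
llm   = [2, 1, 3, 2, 0, 1, 3, 0, 2, 2]
print(round(cohens_kappa(human, llm), 3))  # prints 0.459 on this toy data
```

Kappa corrects raw agreement for the agreement expected by chance, which matters here because graded relevance labels are heavily skewed toward low grades in real collections.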

📝 Abstract
Making the relevance judgments for a TREC-style test collection can be complex and expensive. A typical TREC track usually involves a team of six contractors working for 2-4 weeks. Those contractors need to be trained and monitored. Software has to be written to support recording relevance judgments correctly and efficiently. The recent advent of large language models that produce astoundingly human-like flowing text output in response to a natural language prompt has inspired IR researchers to wonder how those models might be used in the relevance judgment collection process. At the ACM SIGIR 2024 conference, a workshop "LLM4Eval" provided a venue for this work, and featured a data challenge activity where participants reproduced TREC deep learning track judgments, as was done by Thomas et al. (arXiv:2408.08896, arXiv:2309.10621). I was asked to give a keynote at the workshop, and this paper presents that keynote in article form. The bottom-line-up-front message is, don't use LLMs to create relevance judgments for TREC-style evaluations.
Problem

Research questions and friction points this paper is trying to address.

Can LLMs replace human assessors in creating TREC-style relevance judgments?
Traditional relevance judging is complex and costly, requiring trained, monitored contractors and purpose-built judging software
Whether LLM-generated judgments are reliable enough to serve as ground truth for IR evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Keynote argument, presented in article form, for why LLMs should not create TREC-style relevance judgments
Analysis framed around the SIGIR 2024 LLM4Eval data challenge, where participants reproduced TREC deep learning track judgments
Practical bottom line for evaluation practice: keep human assessors in the relevance judgment process
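On the tooling side, the abstract notes that software has to be written to record relevance judgments correctly and efficiently. A minimal sketch of such a recorder (hypothetical helper names; the output follows the standard qrels text format of `qid iteration docid grade`):

```python
VALID_GRADES = {0, 1, 2, 3}  # TREC DL-style graded relevance levels

def record_judgment(qrels, qid, docid, grade):
    """Validate and record one judgment; reject bad grades and duplicates."""
    if grade not in VALID_GRADES:
        raise ValueError(f"grade must be one of {sorted(VALID_GRADES)}, got {grade!r}")
    key = (qid, docid)
    if key in qrels:
        raise ValueError(f"duplicate judgment for {key}")
    qrels[key] = grade

def write_qrels(qrels, path):
    """Write judgments in the standard qrels format: qid 0 docid grade."""
    with open(path, "w") as f:
        for (qid, docid), grade in sorted(qrels.items()):
            f.write(f"{qid} 0 {docid} {grade}\n")

# Example: record two judgments for one query.
qrels = {}
record_judgment(qrels, "q1", "d7", 2)
record_judgment(qrels, "q1", "d9", 0)
```

Even a toy recorder has to enforce the two invariants the abstract's "correctly and efficiently" implies: grades stay in range, and no query-document pair is judged twice.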