RevisEval: Improving LLM-as-a-Judge via Response-Adapted References

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing LLM-based evaluation methods correlate insufficiently with human judgments, in part because they lack semantically adaptive reference standards. To address this, the paper proposes RevisEval, an evaluation paradigm in which an LLM revises each candidate response to produce a "response-adapted reference" that is semantically aligned with the output being judged, improving both the interpretability and the reliability of evaluation. This is the first framework to dynamically tailor references to individual responses. RevisEval significantly improves the human correlation of classical metrics (e.g., BLEU, BERTScore) by +12.3%–18.7%, and demonstrates for the first time that such adaptive references can match or even surpass direct LLM-based judgment. Extensive experiments across diverse NLG tasks and open-ended instruction-following benchmarks show that RevisEval consistently outperforms both reference-free and fixed-reference baselines.

📝 Abstract
With significant efforts in recent studies, LLM-as-a-Judge has become a cost-effective alternative to human evaluation for assessing text generation quality in a wide range of tasks. However, there still remains a reliability gap between LLM-as-a-Judge and human evaluation. One important reason is the lack of guided oracles in the evaluation process. Motivated by the pervasive role of references in classic text evaluation, we introduce RevisEval, a novel text generation evaluation paradigm via response-adapted references. RevisEval is driven by the key observation that an ideal reference should maintain the necessary relevance to the response being evaluated. Specifically, RevisEval leverages the text revision capabilities of large language models (LLMs) to adaptively revise the response, then treats the revised text as the reference (response-adapted reference) for the subsequent evaluation. Extensive experiments demonstrate that RevisEval outperforms traditional reference-free and reference-based evaluation paradigms that use LLM-as-a-Judge across NLG tasks and open-ended instruction-following tasks. More importantly, our response-adapted references can further boost classical text metrics, e.g., BLEU and BERTScore, compared to traditional references, and even rival LLM-as-a-Judge. A detailed analysis is also conducted to confirm RevisEval's effectiveness in bias reduction, the impact of inference cost, and reference relevance.
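The two-step pipeline described in the abstract (revise, then score against the revision) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `revise_response` stands in for the LLM revision call, and a simple unigram-overlap F1 stands in for BLEU/BERTScore; all names here are hypothetical.

```python
def revise_response(instruction: str, response: str) -> str:
    """Placeholder for the LLM revision step, which minimally edits the
    response so that it satisfies the instruction. Here a trivial fix
    keeps the sketch self-contained and runnable."""
    return response.replace("teh", "the")

def token_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap F1, a stand-in for any reference-based metric
    such as BLEU or BERTScore."""
    ref, cand = set(reference.split()), set(candidate.split())
    overlap = len(ref & cand)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def reviseval_score(instruction: str, response: str) -> float:
    # Step 1: adapt a reference to this specific response via LLM revision.
    adapted_reference = revise_response(instruction, response)
    # Step 2: score the original response against the adapted reference
    # with a classic reference-based metric.
    return token_f1(adapted_reference, response)

print(round(reviseval_score("Fix the typo.", "teh cat sat on the mat"), 3))
```

Because the reference is derived from the response itself, it stays topically and lexically relevant to what is being judged, which is the core observation motivating RevisEval.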
Problem

Research questions and friction points this paper is trying to address.

LLM-as-a-Judge still shows a reliability gap relative to human evaluation.
Evaluation lacks a guiding oracle: fixed references are often irrelevant to the response being judged.
Classical metrics such as BLEU and BERTScore underperform when computed against traditional, response-agnostic references.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Revises each candidate response with an LLM and uses the revision as a response-adapted reference
Leverages LLM text-revision capabilities rather than direct judgment
Boosts BLEU and BERTScore with adapted references to rival LLM-as-a-Judge