Language Model Preference Evaluation with Multiple Weak Evaluators

📅 2024-10-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM preference evaluation often suffers from circular contradictions and inconsistent judgments because it relies on a single strong discriminator. This paper proposes GED (Preference Graph Ensemble and Denoise), a framework that builds preference graphs from multiple weak evaluators, then applies graph ensemble learning and directed acyclic graph (DAG) constraints to jointly denoise the aggregated graph and eliminate cycles, yielding robust and consistent preference rankings. The authors provide theoretical guarantees that coordinated weak evaluators can surpass the discriminative performance of a single large model and that the framework recovers the underlying ground-truth preference structure. Evaluated on ten benchmarks, GED improves model ranking accuracy, response selection, and alignment quality. Notably, an ensemble of weaker evaluators (Llama3-8B, Mistral-7B, and Qwen2-7B) achieves higher preference discrimination accuracy than the standalone Qwen2-72B.

📝 Abstract
Despite the remarkable success of Large Language Models (LLMs), evaluating their outputs' quality regarding *preference* remains a critical challenge. Existing works usually leverage a powerful LLM (e.g., GPT4) as the judge for comparing LLMs' outputs pairwise, yet such a model-based evaluator is vulnerable to *conflicting preferences*, i.e., output A is better than B, B than C, but C than A, causing contradictory evaluation results. To improve model-based preference evaluation, we introduce GED (Preference Graph Ensemble and Denoise), a novel approach that leverages multiple model-based evaluators to construct preference graphs, and then ensembles and denoises these graphs for better, non-contradictory evaluation results. In particular, our method consists of two primary stages: aggregating evaluations into a unified graph and applying a denoising process to eliminate cyclic inconsistencies, ensuring a directed acyclic graph (DAG) structure. We provide theoretical guarantees for our framework, demonstrating its efficacy in recovering the ground truth preference structure. Extensive experiments across ten benchmark datasets show that GED outperforms baseline methods in model ranking, response selection, and model alignment tasks. Notably, GED combines weaker evaluators like Llama3-8B, Mistral-7B, and Qwen2-7B to surpass the performance of stronger evaluators like Qwen2-72B, highlighting its ability to enhance evaluation reliability and improve model performance.
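The two-stage pipeline the abstract describes (aggregate pairwise judgments into one weighted preference graph, then delete edges until no cycle remains) can be sketched in plain Python. This is an illustrative toy, not the paper's actual algorithm: GED's denoising step is a principled DAG projection with theoretical guarantees, whereas the `denoise_to_dag` below uses a simple greedy heuristic (break each detected cycle at its lowest-weight edge). All function names are hypothetical.

```python
from collections import defaultdict

def aggregate(evaluator_judgments):
    """Merge pairwise judgments from several weak evaluators into one
    weighted preference graph. A judgment (a, b) means the evaluator
    preferred response a over response b."""
    votes = defaultdict(int)
    for judgments in evaluator_judgments:
        for a, b in judgments:
            votes[(a, b)] += 1
    # Net opposite directions so contradictory votes partially cancel.
    edges = {}
    for (a, b), w in votes.items():
        net = w - votes.get((b, a), 0)
        if net > 0:
            edges[(a, b)] = net
    return edges

def find_cycle(edges):
    """Return the edge list of some directed cycle, or None if the
    graph is already acyclic (DFS back-edge detection)."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    stack = []

    def dfs(u):
        color[u] = GRAY
        stack.append(u)
        for v in graph[u]:
            if color[v] == GRAY:            # back edge: cycle found
                cyc = stack[stack.index(v):] + [v]
                return list(zip(cyc, cyc[1:]))
            if color[v] == WHITE:
                found = dfs(v)
                if found:
                    return found
        stack.pop()
        color[u] = BLACK
        return None

    for node in list(graph):
        if color[node] == WHITE:
            found = dfs(node)
            if found:
                return found
    return None

def denoise_to_dag(edges):
    """Greedy cycle elimination: repeatedly drop the weakest edge of a
    detected cycle until the preference graph is a DAG."""
    edges = dict(edges)
    while (cycle := find_cycle(edges)) is not None:
        weakest = min(cycle, key=lambda e: edges[e])
        del edges[weakest]
    return edges

# Two evaluators agree that A > B > C; a third dissents with C > A,
# creating the cycle A -> B -> C -> A. The dissenting edge has the
# lowest weight, so denoising removes it.
graph = aggregate([
    [("A", "B"), ("B", "C")],
    [("A", "B"), ("B", "C")],
    [("C", "A")],
])
dag = denoise_to_dag(graph)
print(dag)  # {('A', 'B'): 2, ('B', 'C'): 2}
```

A topological sort of the resulting DAG then yields the contradiction-free preference ranking the paper uses for model ranking and response selection.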
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Output Quality Assessment
Inconsistent Preference Judgments
Innovation

Methods, ideas, or system contributions that make the work stand out.

GED
Preference Assessment
DAG Integration