E-Scores for (In)Correctness Assessment of Generative Model Outputs

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for assessing the correctness of generative model outputs—especially from large language models (LLMs)—lack principled statistical frameworks; p-value–based approaches are vulnerable to p-hacking, so error-tolerance levels cannot be chosen post hoc without invalidating their guarantees. Method: The paper proposes e-scores, an error-quantification framework grounded in e-values—nonnegative random variables whose magnitude measures evidence against the correctness of an output—which inherently support post-hoc tolerance selection without compromising statistical validity. By combining conformal prediction with e-value theory, e-scores provide a coherent assessment paradigm applicable to diverse correctness criteria. Contribution/Results: Experiments on LLM outputs for two correctness types, mathematical factuality and property constraints satisfaction, demonstrate that e-scores match the statistical guarantees of prior p-value–based methods while adding flexibility in choosing tolerance levels after observing the scores, yielding a theoretically rigorous yet practical tool for trustworthy evaluation of generative model outputs.

📝 Abstract
While generative models, especially large language models (LLMs), are ubiquitous in today's world, principled mechanisms to assess their (in)correctness are limited. Using the conformal prediction framework, previous works construct sets of LLM responses where the probability of including an incorrect response, or error, is capped at a desired user-defined tolerance level. However, since these methods are based on p-values, they are susceptible to p-hacking, i.e., choosing the tolerance level post-hoc can invalidate the guarantees. We therefore leverage e-values to complement generative model outputs with e-scores as a measure of incorrectness. In addition to achieving the same statistical guarantees as before, e-scores provide users flexibility in adaptively choosing tolerance levels after observing the e-scores themselves, by upper bounding a post-hoc notion of error called size distortion. We experimentally demonstrate their efficacy in assessing LLM outputs for different correctness types: mathematical factuality and property constraints satisfaction.
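The abstract's core object, an e-score, can be illustrated with a standard conformal e-value construction. The sketch below is not the paper's exact method; it shows one well-known way to turn exchangeable nonconformity scores into a valid e-value: under exchangeability of the calibration scores and the test score, the ratio of the test score to the average of all scores has expectation at most 1 when the test output is correct. The scoring function and toy data are assumptions for illustration.

```python
import numpy as np

def conformal_e_score(cal_scores, test_score):
    """E-score for a test output from exchangeable nonconformity scores.

    If the test output is correct, its score is exchangeable with the
    calibration scores, so (n+1) * test_score / sum(all scores) has
    expectation at most 1 -- a valid e-value. Large values are evidence
    of incorrectness.
    """
    all_scores = np.append(cal_scores, test_score)
    return len(all_scores) * test_score / all_scores.sum()

# Toy calibration set: nonconformity scores from known-correct responses.
rng = np.random.default_rng(0)
cal = rng.uniform(0, 1, size=100)

print(conformal_e_score(cal, 0.1))  # typical score -> small e-score
print(conformal_e_score(cal, 5.0))  # anomalous score -> large e-score
```

A small e-score is consistent with correctness; an e-score well above 1 flags likely incorrectness, and by Markov's inequality the chance a correct output receives an e-score of at least 1/α is at most α.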
Problem

Research questions and friction points this paper is trying to address.

Developing principled assessment mechanisms for generative model outputs
Addressing p-hacking vulnerability in statistical correctness guarantees
Providing flexible tolerance selection for different correctness types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using e-values to measure generative model incorrectness
Providing flexible tolerance level selection post-evaluation
Achieving statistical guarantees without p-hacking vulnerability
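The post-hoc flexibility above rests on Markov's inequality: for any valid e-value, flagging outputs with e-score at least 1/α keeps the false-flag rate on correct outputs below α, for every α simultaneously. The simulation below is a hedged illustration of that baseline guarantee (the paper's contribution additionally bounds the size distortion when α is chosen as a function of the observed e-scores); the Exp(1) distribution is an arbitrary stand-in for e-scores of correct outputs, valid because its mean is 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated e-scores for correct outputs: any nonnegative statistic with
# mean <= 1 works as a stand-in; Exp(1) draws are used here.
e_scores = rng.exponential(1.0, size=100_000)

# The tolerance level can be picked after computing the e-scores:
# Markov's inequality gives P(e >= 1/alpha) <= alpha for every alpha.
for alpha in (0.05, 0.1, 0.2):
    false_flag_rate = np.mean(e_scores >= 1 / alpha)
    print(f"alpha={alpha}: empirical false-flag rate {false_flag_rate:.4f}")
```

Because the bound holds uniformly over α, no single tolerance has to be fixed before evaluation, which is exactly the flexibility that p-value–based conformal sets lack.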