Reliable Evaluation Protocol for Low-Precision Retrieval

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In low-precision retrieval, reduced numerical precision induces spurious ties in relevance scores, leading to ranking instability and unreliable evaluation. To address this, we propose a robust evaluation protocol: first, eliminate spurious ties via low-cost, high-precision score refinement; second, introduce tie-aware metrics that explicitly model and quantify ranking uncertainty arising from score granularity loss. Our protocol requires no model retraining and is compatible with diverse retrieval models and scoring functions. Experiments across multiple standard benchmarks demonstrate that our approach significantly reduces evaluation variance, accurately recovers ground-truth metric values, and enhances consistency and reliability in low-precision retrieval assessment. The core innovation lies in the synergistic integration of high-precision scoring and tie-aware measurement—constituting the first systematic solution to evaluation distortion caused by precision-induced granularity loss in low-precision retrieval.

📝 Abstract
Lowering the numerical precision of model parameters and computations is widely adopted to improve the efficiency of retrieval systems. However, when computing relevance scores between the query and documents at low precision, we observe spurious ties due to the reduced score granularity. This introduces high variability in the results depending on how ties are resolved, making the evaluation less reliable. To address this, we propose a more robust retrieval evaluation protocol designed to reduce score variation. It consists of: (1) High-Precision Scoring (HPS), which upcasts the final scoring step to higher precision to resolve tied candidates at minimal computational cost; and (2) Tie-aware Retrieval Metrics (TRM), which report expected scores, ranges, and bias to quantify the order uncertainty of tied candidates. Our experiments test multiple models with three scoring functions on two retrieval datasets and demonstrate that HPS dramatically reduces tie-induced instability, while TRM accurately recovers expected metric values. This combination enables a more consistent and reliable evaluation system for low-precision retrieval.
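The HPS idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes dot-product scoring, simulates the low-precision pass by casting scores to float16, and the function name `hps_rank` and its signature are hypothetical.

```python
import numpy as np

def hps_rank(query, docs, k=10):
    """Sketch of High-Precision Scoring (HPS): rank with cheap
    low-precision scores, then upcast only the scoring step for the
    top-k candidates when spurious ties appear among them."""
    # Low-precision scoring pass (simulated by truncating to float16).
    low = (docs @ query).astype(np.float16)
    order = np.argsort(-low, kind="stable")
    top = order[:k]
    # Reduced granularity can collapse distinct scores into ties.
    # If the top-k contains ties, re-score those few candidates at
    # float64 and re-rank; this touches only k documents, so the
    # extra cost is negligible compared to the full retrieval pass.
    if len(np.unique(low[top])) < len(top):
        high = docs[top].astype(np.float64) @ query.astype(np.float64)
        top = top[np.argsort(-high, kind="stable")]
    return top
```

In the toy case below, the true scores 2048.0 and 2049.0 collapse to the same float16 value (the spacing between representable float16 values at that magnitude is 2), so the low-precision ranking is arbitrary; the float64 re-score recovers the correct order.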
Problem

Research questions and friction points this paper is trying to address.

Spurious ties in low-precision retrieval evaluation cause high variability
Existing protocols lack robustness in handling tied relevance scores
Unreliable metrics due to reduced granularity in score computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

High-Precision Scoring resolves ties efficiently
Tie-aware Retrieval Metrics quantify uncertainty
Combination ensures reliable low-precision evaluation
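The tie-aware metrics can also be sketched concretely. The example below is a hypothetical illustration for reciprocal rank with a single relevant document: over all orderings of a tie group, it reports the expected metric value, the [min, max] range, and the bias of one fixed deterministic tie-breaking policy relative to that expectation. The function name `tie_aware_rr` and the choice of "relevant document last in its tie group" as the fixed policy are assumptions, not the paper's exact definitions.

```python
import numpy as np

def tie_aware_rr(scores, rel):
    """Sketch of a tie-aware reciprocal-rank metric. Assumes exactly
    one relevant document, given by index `rel`. Returns the expected
    RR over uniform tie resolutions, its attainable range, and the
    bias of a fixed worst-in-group tie-breaking policy."""
    scores = np.asarray(scores)
    s_rel = scores[rel]
    better = int(np.sum(scores > s_rel))   # strictly higher-scored docs
    tied = int(np.sum(scores == s_rel))    # tie group incl. relevant doc
    # The relevant doc is equally likely to land at any rank in
    # better+1 .. better+tied under a uniform tie resolution.
    ranks = np.arange(better + 1, better + tied + 1)
    rr = 1.0 / ranks
    expected = float(rr.mean())
    # Fixed policy: relevant doc placed last within its tie group.
    fixed = 1.0 / (better + tied)
    return {
        "expected": expected,
        "range": (float(rr.min()), float(rr.max())),
        "bias": fixed - expected,
    }
```

For scores `[0.9, 0.9, 0.5]` with the relevant document at index 0, the relevant document sits in a two-way tie at the top, so RR is 1.0 or 0.5 with equal probability: expected 0.75, range (0.5, 1.0), and the fixed last-in-group policy underestimates by 0.25.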