🤖 AI Summary
Traditional search evaluation over-relies on relevance labels and fails to reflect whether users actually achieve their search goals. This paper proposes TRUE, a task-aware, rubric-driven, LLM-based evaluation paradigm centered on *usefulness*. Methodologically, TRUE integrates search-session modeling, rubric-guided structured annotation, iterative sampling, and chain-of-thought reasoning, alongside a token-efficient fine-tuning strategy. It is the first work to systematically demonstrate that LLMs can disentangle usefulness judgments from relevance; it also establishes a comprehensive set of usefulness metrics and identifies high-information-gain label combinations. Experiments show that fine-tuned models classify context-aware usefulness significantly more accurately and reach moderate agreement with human annotators, enabling scalable, low-cost deployment.
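To make the pipeline concrete, here is a minimal sketch of what rubric-guided, session-aware usefulness annotation could look like. The rubric levels, session fields, and prompt wording are illustrative assumptions, not the paper's actual artifacts; the LLM call itself is omitted, showing only prompt assembly and structured-label parsing.

```python
# Hypothetical sketch of rubric-guided usefulness annotation in the spirit
# of TRUE. Rubric text, session fields, and the LABEL format are assumed
# for illustration; they are not taken from the paper.
from dataclasses import dataclass

# Assumed 4-level usefulness rubric (0-3).
RUBRIC = {
    0: "Not useful: the document did not help the user's task at all.",
    1: "Somewhat useful: tangential content, little task progress.",
    2: "Useful: the document clearly contributed to the search goal.",
    3: "Very useful: the document largely satisfied the information need.",
}

@dataclass
class SessionStep:
    query: str
    clicked_title: str
    dwell_seconds: float  # implicit behavior signal

def build_prompt(task: str, session: list[SessionStep], doc: str) -> str:
    """Assemble a chain-of-thought prompt from the task description,
    the full session history (implicit + explicit signals), and the rubric."""
    history = "\n".join(
        f"- query: {s.query!r}; clicked: {s.clicked_title!r}; "
        f"dwell: {s.dwell_seconds:.0f}s"
        for s in session
    )
    rubric = "\n".join(f"{k}: {v}" for k, v in RUBRIC.items())
    return (
        f"Search task: {task}\n"
        f"Session history:\n{history}\n"
        f"Candidate document: {doc}\n"
        f"Usefulness rubric:\n{rubric}\n"
        "Reason step by step about whether this document helped the user "
        "achieve the task goal, then answer with 'LABEL: <0-3>'."
    )

def parse_label(llm_output: str) -> int:
    """Extract the structured label; fail loudly if the format was ignored."""
    for line in reversed(llm_output.strip().splitlines()):
        if line.startswith("LABEL:"):
            return int(line.split(":", 1)[1].strip())
    raise ValueError("no LABEL line found in model output")
```

Keeping the label on a fixed `LABEL:` line is one simple way to make iterative sampling cheap to aggregate: each sampled completion reduces to an integer that can be majority-voted.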
📝 Abstract
Evaluation is fundamental in optimizing search experiences and supporting diverse user intents in Information Retrieval (IR). Traditional search evaluation methods primarily rely on relevance labels, which assess how well retrieved documents match a user's query. However, relevance alone fails to capture a search system's effectiveness in helping users achieve their search goals, making usefulness a critical evaluation criterion. In this paper, we explore an alternative approach: LLM-generated usefulness labels, which incorporate both implicit and explicit user behavior signals to evaluate document usefulness. We propose Task-aware Rubric-based Usefulness Evaluation (TRUE), a rubric-driven evaluation method that employs iterative sampling and reasoning to model complex search behavior patterns. Our findings show that (i) LLMs can generate usefulness labels with moderate agreement to human judgments by leveraging comprehensive search session history, incorporating personalization and contextual understanding, and (ii) fine-tuned LLMs improve usefulness judgments when provided with structured search session contexts. Additionally, we examine whether LLMs can distinguish between relevance and usefulness, particularly in cases where this divergence impacts search success. We also conduct an ablation study to identify the key metrics for accurate usefulness label generation, optimizing for token efficiency and cost-effectiveness in real-world applications. This study advances LLM-based usefulness evaluation by refining key user metrics, exploring the reliability of LLM-generated labels, and ensuring feasibility for large-scale search systems.
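"Moderate agreement" is typically quantified with a chance-corrected statistic such as Cohen's kappa, where values around 0.41-0.60 are conventionally read as moderate. The sketch below shows the computation on fabricated label data, purely to illustrate how LLM labels would be compared against human annotations; it is not the paper's evaluation code.

```python
# Illustrative: chance-corrected agreement between two annotators
# (e.g., LLM vs. human usefulness labels) via Cohen's kappa.
# The label values used in any example are fabricated for demonstration.
from collections import Counter

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance from each
    annotator's marginal label distribution."""
    assert len(a) == len(b) and a, "need two equal-length, non-empty lists"
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

A raw percent-agreement number would overstate reliability whenever one usefulness level dominates the data, which is why kappa is the usual choice for annotation studies.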