Precise Zero-Shot Pointwise Ranking with LLMs through Post-Aggregated Global Context Information

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
In zero-shot document ranking, pointwise LLM-based methods suffer from inconsistent scoring and suboptimal performance because they fail to model inter-document comparisons. To address this, the paper proposes the Global-Consistent Comparative Pointwise Ranking (GCCP) strategy. Its core innovations are: (1) a query-focused anchor document, summarized from pseudo-relevant candidates, that serves as a globally shared reference point, so each candidate can be scored independently yet contrastively against the same global context; and (2) a training-free Post-Aggregated Global Context (PAGC) mechanism that lightweightly fuses these contrastive scores with existing pointwise scores. Evaluated on the TREC Deep Learning and BEIR benchmarks, GCCP significantly outperforms prior zero-shot pointwise methods and approaches the effectiveness of pairwise and listwise models, while retaining pointwise-level efficiency. The paper presents this as the first pointwise approach to incorporate global comparative signals without sacrificing efficiency.

📝 Abstract
Recent advancements have successfully harnessed the power of Large Language Models (LLMs) for zero-shot document ranking, exploring a variety of prompting strategies. Comparative approaches like pairwise and listwise achieve high effectiveness but are computationally intensive and thus less practical for larger-scale applications. Scoring-based pointwise approaches exhibit superior efficiency by independently and simultaneously generating the relevance scores for each candidate document. However, this independence ignores critical comparative insights between documents, resulting in inconsistent scoring and suboptimal performance. In this paper, we aim to improve the effectiveness of pointwise methods while preserving their efficiency through two key innovations: (1) We propose a novel Global-Consistent Comparative Pointwise Ranking (GCCP) strategy that incorporates global reference comparisons between each candidate and an anchor document to generate contrastive relevance scores. We strategically design the anchor document as a query-focused summary of pseudo-relevant candidates, which serves as an effective reference point by capturing the global context for document comparison. (2) These contrastive relevance scores can be efficiently Post-Aggregated with existing pointwise methods, seamlessly integrating essential Global Context information in a training-free manner (PAGC). Extensive experiments on the TREC DL and BEIR benchmark demonstrate that our approach significantly outperforms previous pointwise methods while maintaining comparable efficiency. Our method also achieves competitive performance against comparative methods that require substantially more computational resources. More analyses further validate the efficacy of our anchor construction strategy.
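The post-aggregation idea from the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the linear interpolation weight `alpha` and the min-max normalization are assumptions, and `pagc_aggregate` is a hypothetical helper name.

```python
def min_max_normalize(scores):
    """Map scores to [0, 1] so two score sources are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.5] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def pagc_aggregate(pointwise_scores, contrastive_scores, alpha=0.5):
    """Blend independent pointwise relevance scores with contrastive scores
    obtained by comparing each candidate against a shared anchor document.
    Training-free: only normalization and a weighted sum."""
    p = min_max_normalize(pointwise_scores)
    c = min_max_normalize(contrastive_scores)
    return [alpha * pi + (1 - alpha) * ci for pi, ci in zip(p, c)]

# Toy example: pointwise scoring alone ranks doc0 first, but comparison
# against the shared anchor favors doc1; the aggregate reconciles the two.
pointwise = [0.9, 0.8, 0.1]
contrastive = [0.4, 0.7, 0.2]
final = pagc_aggregate(pointwise, contrastive, alpha=0.5)
ranking = sorted(range(len(final)), key=lambda i: -final[i])
```

Because both score lists come from independent, parallelizable LLM calls, this aggregation step preserves the efficiency profile of pointwise ranking.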
Problem

Research questions and friction points this paper is trying to address.

Improve the effectiveness of pointwise ranking while preserving its efficiency
Address inconsistent scoring in zero-shot pointwise document ranking
Bring comparative, global context into LLM-based ranking without added computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Global-Consistent Comparative Pointwise Ranking (GCCP)
Post-Aggregated Global Context (PAGC) integration
Query-focused anchor document construction
Kehan Long
University of California San Diego
Robotics, Control, Optimization, Artificial Intelligence

Shasha Li
National University of Defense Technology, Changsha, China

Chen Xu
National University of Defense Technology, Changsha, China

Jintao Tang
National University of Defense Technology
Natural Language Processing

Ting Wang
National University of Defense Technology, Changsha, China