PosIR: Position-Aware Heterogeneous Information Retrieval Benchmark

📅 2026-01-13
🤖 AI Summary
This work addresses a limitation of existing information retrieval benchmarks: they cannot disentangle positional bias from genuine retrieval capability within long documents. To this end, the authors propose PosIR, a multilingual, position-aware retrieval benchmark spanning 10 languages and 31 domains that systematically decouples document length from information location by precisely anchoring relevance labels to specific reference text spans. Evaluation of 10 state-of-the-art embedding models on PosIR reveals pervasive primacy and recency biases among mainstream architectures. Furthermore, model performance on long-context retrieval shows only weak correlation with scores on short-text benchmarks such as MMTEB, underscoring PosIR's utility for diagnosing and advancing positionally robust retrieval systems.

📝 Abstract
While dense retrieval models have achieved remarkable success, rigorous evaluation of their sensitivity to the position of relevant information (i.e., position bias) remains largely unexplored. Existing benchmarks typically employ position-agnostic relevance labels, conflating the challenge of processing long contexts with the bias against specific evidence locations. To address this challenge, we introduce PosIR (Position-Aware Information Retrieval), a comprehensive benchmark designed to diagnose position bias in diverse retrieval scenarios. PosIR comprises 310 datasets spanning 10 languages and 31 domains, constructed through a rigorous pipeline that ties relevance to precise reference spans, enabling the strict disentanglement of document length from information position. Extensive experiments with 10 state-of-the-art embedding models reveal that: (1) Performance on PosIR in long-context settings correlates poorly with the MMTEB benchmark, exposing limitations in current short-text benchmarks; (2) Position bias is pervasive and intensifies with document length, with most models exhibiting primacy bias while certain models show unexpected recency bias; (3) Gradient-based saliency analysis further uncovers the distinct internal attention mechanisms driving these positional preferences. In summary, PosIR serves as a valuable diagnostic framework to foster the development of position-robust retrieval systems.
Problem

Research questions and friction points this paper is trying to address.

position bias
dense retrieval
information retrieval
long-context
relevance evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

position bias
dense retrieval
information retrieval benchmark
long-context evaluation
saliency analysis
🔎 Similar Papers
Ziyang Zeng, Beijing University of Posts and Telecommunications (Information Retrieval, Large Language Models, Reinforcement Learning)
Dun Zhang, Prior Shape
Yu Yan, Beijing University of Posts and Telecommunications
Xu Sun, Université Caen Normandie, ENSICAEN, CNRS, Normandie Univ., GREYC UMR6072, F-14000 Caen, France
Yudong Zhou, Prior Shape
Yuqing Yang, Beijing University of Posts and Telecommunications (Machine Learning, Bioinformatics, Medical Informatics)