🤖 AI Summary
This work investigates how text generated by large language models (LLMs) affects term-based retrieval models such as BM25, specifically whether such text introduces an inherent ranking bias toward human- or machine-origin documents. Using linguistic analysis (Zipf's law fitting, term frequency distribution modeling, and document diversity metrics), we systematically compare LLM-generated and human-written texts in terms of lexical statistics and retrieval behavior. Results show that LLM-generated text exhibits a smoother high-frequency term distribution and a steeper low-frequency tail, yielding higher term specificity and greater inter-document lexical diversity. Crucially, BM25 ranking is governed primarily by the alignment between query and document term distributions, not by textual origin, revealing no intrinsic "machine-generation bias." This study provides the first term-statistical explanation of LLM content's impact on classical information retrieval models, offering a theoretical foundation for designing robust retrieval systems that handle heterogeneous (human + machine) text sources.
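As a rough illustration (not the paper's code), the Zipf's law fitting mentioned above can be sketched as a least-squares fit of log-frequency against log-rank; the toy corpus below is an assumption for demonstration, and real comparisons would use large human-written vs. LLM-generated corpora:

```python
from collections import Counter
import math

def zipf_slope(tokens, top_k=None):
    """Fit the exponent s in freq ~ rank^(-s) by least squares in log-log space."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    if top_k:
        freqs = freqs[:top_k]  # restrict to the high-frequency head if desired
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return -cov / var  # negate: Zipf slope is reported as a positive exponent

# Hypothetical toy corpus, far too small for a meaningful fit.
text = "the cat sat on the mat the cat ran"
print(round(zipf_slope(text.split()), 2))  # → 0.68
```

Fitting the head (high ranks) and tail (low ranks) separately, via `top_k` or a rank cutoff, is one way to operationalize the "smoother high-frequency, steeper low-frequency" contrast the summary reports.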
📝 Abstract
As more content generated by large language models (LLMs) floods the Internet, information retrieval (IR) systems face the challenge of handling a blend of human-authored and machine-generated texts. Recent studies suggest that neural retrievers may exhibit a preferential inclination toward LLM-generated content, while classic term-based retrievers like BM25 tend to favor human-written documents. This paper investigates the influence of LLM-generated content on term-based retrieval models, which are valued for their efficiency and robust generalization across domains. Our linguistic analysis reveals that LLM-generated texts exhibit smoother high-frequency and steeper low-frequency Zipf slopes, higher term specificity, and greater document-level diversity. These traits are consistent with LLMs being trained to optimize reader experience through diverse and precise expression. Our study further examines whether term-based retrieval models exhibit source bias, concluding that these models prioritize documents whose term distributions closely match those of the queries, rather than displaying an inherent preference for either source. This work provides a foundation for understanding and addressing potential biases in term-based IR systems managing mixed-source content.
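The abstract's central claim, that BM25 ranks by query-document term alignment rather than by source, can be illustrated with a minimal sketch of the standard Okapi BM25 formula (the toy corpus, query, and parameter values below are assumptions for demonstration, not the paper's experimental setup):

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Okapi BM25 score of one document for a query over a tokenized corpus."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N  # average document length
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in corpus if t in d)  # document frequency of term t
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        f = tf[t]
        # Saturating term-frequency component with length normalization.
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

# Two hypothetical documents: only their term overlap with the query matters,
# not whether a human or an LLM produced them.
corpus = [["neural", "retrieval", "models"],
          ["bm25", "term", "weighting", "bm25"]]
query = ["bm25", "weighting"]
for doc in corpus:
    print(doc, round(bm25_score(query, doc, corpus), 3))
```

Under this formula, a document's score depends only on term frequencies, document frequencies, and length, which is why any apparent "source bias" must come from distributional differences in the text itself rather than from the ranking function.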