How Do LLM-Generated Texts Impact Term-Based Retrieval Models?

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how large language model (LLM)-generated text affects term-based retrieval models (e.g., BM25), specifically examining whether such text introduces inherent human- or machine-origin bias in ranking. Using linguistic analysis—including Zipf’s law fitting, term frequency distribution modeling, and document diversity metrics—we systematically compare LLM-generated and human-written texts in terms of lexical statistics and retrieval behavior. Results show that LLM text exhibits smoother high-frequency term distributions and steeper low-frequency tails, yielding higher term specificity and greater inter-document lexical diversity. Crucially, BM25 ranking is governed primarily by query–document term distribution alignment, not by textual origin—thus revealing no intrinsic “machine-generation bias.” This study provides the first term-statistical explanation of LLM content’s impact on classical information retrieval models, offering theoretical foundations for designing robust retrieval systems handling heterogeneous (human + machine) text sources.
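The Zipf's-law fitting mentioned in the summary can be sketched as a log-log regression on a term rank-frequency distribution. The helper below is illustrative only; the paper's actual fitting procedure is not published on this page, and the example tokens are made up.

```python
# Minimal sketch: estimate the Zipf exponent s in freq(rank) ~ C * rank^(-s)
# by least-squares regression in log-log space. Illustrative, not the
# paper's implementation.
from collections import Counter
import math

def zipf_slope(tokens, max_rank=1000):
    """Fit the slope of log(frequency) vs. log(rank) over the top ranks."""
    freqs = sorted(Counter(tokens).values(), reverse=True)[:max_rank]
    xs = [math.log(r + 1) for r in range(len(freqs))]  # log rank (1-based)
    ys = [math.log(f) for f in freqs]                  # log frequency
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return -cov / var  # slope magnitude: larger => steeper tail

tokens = "the cat sat on the mat and the dog sat on the log".split()
slope = zipf_slope(tokens)  # positive; larger means steeper decay
```

Under this framing, the summary's "smoother high-frequency term distributions and steeper low-frequency tails" for LLM text would show up as different fitted slopes over the head versus the tail of the rank-frequency curve.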

📝 Abstract
As more content generated by large language models (LLMs) floods into the Internet, information retrieval (IR) systems now face the challenge of distinguishing and handling a blend of human-authored and machine-generated texts. Recent studies suggest that neural retrievers may exhibit a preferential inclination toward LLM-generated content, while classic term-based retrievers like BM25 tend to favor human-written documents. This paper investigates the influence of LLM-generated content on term-based retrieval models, which are valued for their efficiency and robust generalization across domains. Our linguistic analysis reveals that LLM-generated texts exhibit smoother high-frequency and steeper low-frequency Zipf slopes, higher term specificity, and greater document-level diversity. These traits are aligned with LLMs being trained to optimize reader experience through diverse and precise expressions. Our study further explores whether term-based retrieval models demonstrate source bias, concluding that these models prioritize documents whose term distributions closely correspond to those of the queries, rather than displaying an inherent source bias. This work provides a foundation for understanding and addressing potential biases in term-based IR systems managing mixed-source content.
Problem

Research questions and friction points this paper is trying to address.

Investigating LLM-generated content's impact on term-based retrieval models
Assessing source bias in retrieval systems handling mixed human-machine texts
Analyzing linguistic traits affecting retrieval performance in mixed corpora
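One way to quantify the inter-document lexical diversity discussed above is mean pairwise Jaccard distance between document vocabularies. The metric choice here is an assumption for illustration; the page does not specify which diversity metrics the paper uses.

```python
# Illustrative inter-document lexical diversity measure: average
# Jaccard distance 1 - |A∩B|/|A∪B| over all document pairs.
# An assumed metric, not necessarily the paper's.
from itertools import combinations

def mean_jaccard_distance(docs):
    """docs: list of token lists; returns mean pairwise vocabulary distance."""
    vocabs = [set(d) for d in docs]
    pairs = list(combinations(vocabs, 2))
    return sum(1 - len(a & b) / len(a | b) for a, b in pairs) / len(pairs)
```

A higher value means documents share less vocabulary with one another, matching the summary's claim that LLM-generated corpora show greater document-level diversity.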
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes linguistic patterns in LLM texts using Zipf slopes
Measures term specificity and document diversity metrics
Tests source bias through term distribution correspondence analysis
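The source-bias finding rests on how BM25 actually scores documents: the formula sees only term statistics, never text provenance. A standard BM25 sketch makes this concrete; parameter values (k1, b) are the common defaults, not necessarily those used in the paper, and the toy corpus is invented.

```python
# Standard BM25 scoring over tokenized documents. The score depends only
# on query-document term statistics (tf, df, document length), which is
# why no human- vs. machine-origin signal can enter the ranking.
import math
from collections import Counter

def bm25_score(query, doc, corpus, k1=1.2, b=0.75):
    """query/doc: token lists; corpus: list of token lists (includes doc)."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc)
    score = 0.0
    for term in query:
        df = sum(1 for d in corpus if term in d)       # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score
```

In this view, if LLM text tends to rank higher or lower under BM25, it is because its term distributions align differently with the query, not because the model detects machine origin.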
Wei Huang
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China
Keping Bi
Institute of Computing Technology, Chinese Academy of Sciences
Information Retrieval
Yinqiong Cai
Institute of Computing Technology, Chinese Academy of Sciences
Information Retrieval · NLP · Deep Learning
Wei Chen
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China
Jiafeng Guo
Professor, Institute of Computing Technology, CAS
Information Retrieval · Machine Learning · Text Analysis · NeuIR
Xueqi Cheng
Ph.D. student, Florida State University
Data mining · LLM · GNN · Computational social science