A Two-Sample Test of Text Generation Similarity

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the two-sample testing problem of determining whether two document collections are drawn from the same generative distribution in large-scale text data. We propose the first language-model-based entropy-estimation framework for such tests. Methodologically, we (1) embed neural language model–estimated document entropy (e.g., from Transformer-based LMs) into a unified estimation-inference pipeline; and (2) construct an asymptotically normal test statistic based on the entropy difference, augmented by a multi-split p-value aggregation strategy to enhance statistical power. The method rigorously controls the Type-I error rate on both synthetic and real-world text benchmarks, while achieving substantially higher power than existing text-similarity-based two-sample tests. By grounding distributional comparison in information-theoretic principles, our approach establishes a verifiable, entropy-driven paradigm for assessing the generative equivalence of textual sources.

📝 Abstract
The surge in digitized text data calls for reliable inferential methods on observed textual patterns. This article proposes a novel two-sample text test for comparing the similarity of two groups of documents. The null hypothesis is that the probabilistic mapping generating the textual data is identical across the two groups. The proposed test assesses text similarity by comparing the entropy of the documents, estimated using neural network-based language models. The test statistic is derived from an estimation-and-inference framework: the entropy is first approximated on an estimation set, and inference is then carried out on the remaining data. We show theoretically that, under mild conditions, the test statistic asymptotically follows a normal distribution. A multiple data-splitting strategy that combines p-values into a unified decision is proposed to enhance test power. Simulation studies and a real data example demonstrate that the proposed two-sample text test maintains the nominal Type-I error rate while offering greater power than existing methods. The proposed method provides a novel solution for detecting differences between document classes, particularly in fields where large-scale textual information is crucial.
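As a rough illustration of the estimation-and-inference idea in the abstract, the sketch below is a minimal, assumption-laden example rather than the paper's implementation: it takes per-document entropy estimates (e.g., each document's average negative log-likelihood under a language model, computed on the inference split) and forms an asymptotically normal statistic from the difference of group means. All function and variable names here are hypothetical.

```python
import math
import random
import statistics

def entropy_z_test(ent_a, ent_b):
    """Two-sided z-test on the difference of mean per-document entropies.

    ent_a, ent_b: per-document entropy estimates for the two groups,
    assumed to be computed on the inference split of the data.
    """
    na, nb = len(ent_a), len(ent_b)
    ma, mb = statistics.fmean(ent_a), statistics.fmean(ent_b)
    va, vb = statistics.variance(ent_a), statistics.variance(ent_b)
    se = math.sqrt(va / na + vb / nb)          # standard error of the difference
    z = (ma - mb) / se
    p = math.erfc(abs(z) / math.sqrt(2))       # two-sided normal p-value
    return z, p

random.seed(0)
# Stand-in entropies; a real pipeline would obtain these from an LM.
# Same generative distribution: entropies drawn from the same law.
group_a = [random.gauss(4.0, 0.3) for _ in range(200)]
group_b = [random.gauss(4.0, 0.3) for _ in range(200)]
z_same, p_same = entropy_z_test(group_a, group_b)

# Shifted distribution: systematically higher mean entropy.
group_c = [random.gauss(4.3, 0.3) for _ in range(200)]
z_diff, p_diff = entropy_z_test(group_a, group_c)
```

Under the null, the statistic is approximately standard normal, so `p_same` should typically be large, while the shifted comparison yields a tiny `p_diff`.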
Problem

Research questions and friction points this paper is trying to address.

Develops a two-sample test for comparing document similarity
Assesses text similarity via entropy using neural networks
Enhances test power with multiple data-splitting strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural network-based entropy estimation for text
Two-sample test comparing document group similarity
Multiple data-splitting to enhance test power
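The multi-split strategy in the bullets above requires a rule for merging per-split p-values into one decision. The listing does not name the aggregator, so the sketch below uses the Cauchy combination test as one standard choice — an assumption, not necessarily the paper's rule — which remains valid under arbitrary dependence among the splits.

```python
import math

def cauchy_combine(pvals):
    """Cauchy combination test: aggregate p-values from multiple
    data splits into a single p-value. Each p-value is mapped to a
    standard Cauchy variate; their average is again standard Cauchy,
    which gives the combined p-value regardless of dependence."""
    t = sum(math.tan((0.5 - p) * math.pi) for p in pvals) / len(pvals)
    return 0.5 - math.atan(t) / math.pi

# Example: hypothetical p-values from five estimation/inference splits.
pvals = [0.03, 0.08, 0.01, 0.12, 0.05]
combined = cauchy_combine(pvals)
```

Because small p-values map to very large Cauchy variates, a few strongly significant splits dominate the average, so the combined p-value here falls below 0.05 even though two individual splits do not.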
Jingbin Xu
School of Mechanical Engineering, Dalian University of Technology, Dalian, China
Chen Qian
School of Economics and Management, Dalian University of Technology, Dalian, China
Meimei Liu
Assistant Professor, Virginia Tech
statistical theory and methods; variational inference; nonparametric inference; stochastic algorithms
Feng Guo
Department of Statistics, Virginia Tech, Blacksburg, VA, USA