🤖 AI Summary
This paper addresses the two-sample testing problem of determining whether two document collections are drawn from the same generative distribution in large-scale text data. We propose the first language-model-based entropy estimation framework for such tests. Methodologically, we (1) embed neural language model–estimated document entropy—e.g., from Transformer-based LMs—into a unified estimation-inference pipeline; and (2) construct an asymptotically normal test statistic based on the entropy difference, augmented by a multi-split p-value aggregation strategy to enhance statistical power. The method maintains nominal Type-I error control on both synthetic and real-world text benchmarks, while achieving substantially higher statistical power than existing text similarity–based two-sample tests. By grounding distributional comparison in information-theoretic principles, our approach establishes a verifiable, entropy-driven paradigm for assessing the generative equivalence of textual sources.
📝 Abstract
The surge in digitized text data requires reliable inferential methods for observed textual patterns. This article proposes a novel two-sample text test for comparing the similarity of two groups of documents. The null hypothesis is that the probabilistic mapping generating the textual data is identical across the two groups. The proposed test assesses text similarity by comparing the entropy of the documents, where entropy is estimated using neural network-based language models. The test statistic is derived from an estimation-and-inference framework: the entropy is first approximated on an estimation set, and inference is then carried out on the remaining data. We show theoretically that, under mild conditions, the test statistic asymptotically follows a normal distribution. A multiple data-splitting strategy, which combines p-values into a unified decision, is proposed to enhance test power. Simulation studies and a real data example demonstrate that the proposed two-sample text test maintains the nominal Type I error rate while offering greater power than existing methods. The proposed method provides a novel way to assess differences between document classes, particularly in fields where large-scale textual information is crucial.
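The estimation-and-inference pipeline described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the per-document entropy scores are simulated stand-ins for LM-estimated values, the Welch-type z statistic and the Cauchy-style p-value combination rule are assumed choices, and all function names (`z_test_p`, `cauchy_combine`, `multi_split_pvalue`) are hypothetical.

```python
import math
import random

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test_p(x, y):
    # Two-sided Welch-type z test on per-document entropy scores
    # (a simplified stand-in for the paper's test statistic).
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2.0 * (1.0 - normal_cdf(abs(z)))

def cauchy_combine(pvals):
    # Cauchy combination: average the Cauchy-transformed p-values,
    # then map the statistic back to a single p-value.
    t = sum(math.tan((0.5 - p) * math.pi) for p in pvals) / len(pvals)
    return 0.5 - math.atan(t) / math.pi

def multi_split_pvalue(a, b, n_splits=10, seed=1):
    # Multiple data splitting: each split holds out half of each group
    # for inference (in the real method, the other half would be used
    # to fit the neural entropy estimator); p-values are then combined.
    rng = random.Random(seed)
    ps = []
    for _ in range(n_splits):
        ha = rng.sample(a, len(a) // 2)
        hb = rng.sample(b, len(b) // 2)
        ps.append(z_test_p(ha, hb))
    return cauchy_combine(ps)

# Simulated entropy scores for two corpora with shifted mean entropy.
random.seed(0)
group_a = [random.gauss(5.0, 1.0) for _ in range(400)]
group_b = [random.gauss(6.0, 1.0) for _ in range(400)]
p_comb = multi_split_pvalue(group_a, group_b)
print(p_comb)
```

With a one-standard-deviation shift in mean entropy, the combined p-value is essentially zero and the null of identical generative distributions is rejected; the Cauchy combination is used here because it remains valid under the dependence induced by overlapping splits.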