The Truncation Blind Spot: How Decoding Strategies Systematically Exclude Human-Like Token Choices

📅 2026-03-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses a critical limitation in current text generation decoding strategies, which systematically exclude contextually plausible yet low-probability tokens frequently used by humans due to their overreliance on token likelihood. Through systematic experiments across 1.8 million texts, eight language models, five mainstream decoding methods, and 53 hyperparameter configurations, the work uncovers a previously undocumented “truncation blind spot”: standard approaches such as top-k and nucleus sampling routinely exclude 8–18% of human-chosen tokens from their sampling pools. The findings demonstrate that the detectability of machine-generated text stems primarily from decoding strategy rather than model capacity, and that naturalness and undetectability are inherently at odds. Moreover, a simple classifier leveraging predictability and lexical diversity suffices to reliably distinguish human- from machine-authored text.

Technology Category

Application Category

📝 Abstract
Standard decoding strategies for text generation, including top-k, nucleus sampling, and contrastive search, select tokens based on likelihood, restricting selection to high-probability regions. Human language production operates differently: tokens are chosen for communicative appropriateness rather than statistical frequency. This mismatch creates a truncation blind spot: contextually appropriate but statistically rare tokens remain accessible to humans yet unreachable by likelihood-based decoding. We hypothesize this contributes to the detectability of machine-generated text. Analyzing over 1.8 million texts across eight language models, five decoding strategies, and 53 hyperparameter configurations, we find that 8-18% of human-selected tokens fall outside typical truncation boundaries. Simple classifiers trained on predictability and lexical diversity achieve remarkable detection rates. Crucially, neither model scale nor architecture correlates strongly with detectability; truncation parameters account for most variance. Configurations achieving low detectability often produce incoherent text, indicating that evading detection and producing natural text are distinct objectives. These findings suggest detectability is enhanced by likelihood-based token selection, not merely a matter of model capability.
Problem

Research questions and friction points this paper is trying to address.

truncation blind spot
decoding strategies
human-like text generation
machine-generated text detection
token selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

truncation blind spot
likelihood-based decoding
machine text detectability
human-like token selection
decoding strategies
🔎 Similar Papers
No similar papers found.