Low-Perplexity LLM-Generated Sequences and Where To Find Them

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Understanding how large language models (LLMs) memorize training data is critical for transparency, accountability, privacy, and fairness. This paper introduces a systematic provenance-tracing method that combines low-perplexity sequence detection, deduplication, alignment, and scalable text matching to attribute generated content to its original training corpus. Experiments reveal that a substantial fraction of high-probability generations cannot be located in the source corpus, uncovering a widespread "unmapped memory" phenomenon in which LLMs reproduce training-data patterns without the verbatim lexical or structural correspondence that existing provenance techniques can detect. The study provides the first quantitative characterization of the distribution of traceable versus untraceable low-perplexity sequences, empirically confirming the coexistence of direct memorization and implicit reproduction. These findings establish a methodology and empirical foundation for data provenance analysis, copyright assessment, and model auditing.

📝 Abstract
As Large Language Models (LLMs) become increasingly widespread, understanding how specific training data shapes their outputs is crucial for transparency, accountability, privacy, and fairness. To explore how LLMs leverage and replicate their training data, we introduce a systematic approach centered on analyzing low-perplexity sequences: high-probability text spans generated by the model. Our pipeline reliably extracts such long sequences across diverse topics while avoiding degeneration, then traces them back to their sources in the training data. Surprisingly, we find that a substantial portion of these low-perplexity spans cannot be mapped to the corpus. For those that do match, we quantify the distribution of occurrences across source documents, highlighting the scope and nature of verbatim recall and paving the way toward a better understanding of how LLMs' training data impacts their behavior.
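The two core steps of the pipeline described above, extracting low-perplexity spans from model output and tracing them back to the corpus, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the perplexity threshold, minimum span length, and naive exact-substring matching are assumptions for demonstration (the authors use deduplication, alignment, and scalable text-matching algorithms rather than a linear scan).

```python
import math

def low_perplexity_spans(tokens, logprobs, threshold=1.5, min_len=5):
    """Extract contiguous token spans whose per-token perplexity
    (exp of the negative log-probability) stays below `threshold`
    for at least `min_len` consecutive tokens."""
    spans, start = [], None
    for i, lp in enumerate(logprobs):
        if math.exp(-lp) < threshold:      # this token is high-probability
            if start is None:
                start = i                  # open a candidate span
        else:
            if start is not None and i - start >= min_len:
                spans.append(tokens[start:i])
            start = None                   # close the candidate span
    if start is not None and len(tokens) - start >= min_len:
        spans.append(tokens[start:])       # span running to the end
    return spans

def trace_to_corpus(span, corpus_docs):
    """Naive provenance step: return indices of corpus documents that
    contain the span verbatim (stand-in for scalable matching)."""
    text = " ".join(span)
    return [i for i, doc in enumerate(corpus_docs) if text in doc]
```

A span that matches one or more documents is "traceable"; the paper's surprising finding is that many low-perplexity spans return no match at all, even against the full training corpus.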
Problem

Research questions and friction points this paper is trying to address.

Analyzing low-perplexity sequences in LLM outputs
Tracing model-generated text to training data sources
Understanding verbatim recall distribution in LLM behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes low-perplexity sequences from LLMs
Traces sequences back to training data sources
Quantifies distribution of verbatim recall occurrences
Arthur Wuhrmann
École Polytechnique Fédérale de Lausanne, Switzerland
Anastasiia Kucherenko
Institute of Entrepreneurship and Management, HES-SO Valais-Wallis, Switzerland
Andrei Kucharavy
Assistant Professor, HES-SO Valais-Wallis
Machine Learning · Evolution · Distributed Computation · Computational Biology