🤖 AI Summary
This study investigates whether large language models (LLMs) solve benchmark tasks (e.g., LeetCode, MATH) via genuine reasoning or mere memorization of training data, by subjecting them to extreme textual corruption, including aggressive token masking and random noise. We introduce *eager pattern matching* to name this behavior, together with an evaluation framework that systematically injects noise and redacts prompt text, then analyzes performance decay curves to distinguish data-contaminated tasks (exhibiting gradual degradation) from genuinely unseen ones (showing sharp decline). Experiments across multiple model families (GPT, Claude, Llama) reveal a consistent ability to solve corrupted problems that are unreadable to humans, an effect that disappears for tasks published after the models' knowledge cutoff, pointing to memorization rather than reasoning as the underlying mechanism. Our work provides a principled methodology for disentangling memorization from reasoning, with direct implications for robust benchmark design, trustworthy model evaluation, and the quantification of AI safety risks.
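As a rough illustration of the decay-curve idea, the sketch below classifies a task from its solve rates at increasing corruption levels. The function name, the retention threshold, and the decision rule are hypothetical stand-ins for illustration, not the paper's published method.

```python
def classify_decay(accuracies: list[float], retention_threshold: float = 0.5) -> str:
    """Classify a task from its performance decay curve.

    `accuracies` holds solve rates measured at increasing obfuscation
    levels, e.g. [1.0, 0.9, 0.8, 0.7] (gradual decay) vs
    [1.0, 0.4, 0.1, 0.0] (sharp decline). The threshold and the
    heuristic itself are illustrative assumptions.
    """
    baseline = accuracies[0]
    if baseline == 0.0:
        return "unsolved-at-baseline"
    # Fraction of baseline accuracy retained under the heaviest obfuscation:
    # memorised tasks are hypothesised to retain it, unseen ones to collapse.
    retained = accuracies[-1] / baseline
    return "likely-contaminated" if retained >= retention_threshold else "likely-unseen"
```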
📝 Abstract
This paper investigates the ability of large language models (LLMs) to recognise and solve tasks that have been obfuscated beyond recognition. Focusing on competitive programming and benchmark tasks (LeetCode and MATH), we compare performance across multiple models and obfuscation methods, such as noise injection and redaction. We demonstrate that all evaluated LLMs can solve tasks obfuscated to a level at which the text would be unintelligible to human readers and no longer contains key instructions or context. We introduce the concept of *eager pattern matching* to describe this behaviour, which is not observed in tasks published after the models' knowledge cutoff dates, indicating strong memorisation of, or overfitting to, training data rather than legitimate reasoning about the presented problem. We report empirical evidence of distinct performance decay patterns between contaminated and unseen datasets. We discuss the implications for benchmarking and evaluation of model behaviour, arguing for caution when designing experiments around standard datasets. Finally, we propose measuring the decay of performance under obfuscation as a strategy for detecting dataset contamination, and we highlight potential safety risks and interpretability issues for automated software systems.
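For concreteness, here is a minimal sketch of the two obfuscation families named above (noise and redaction). The corruption operators, rates, and the example prompt are assumptions for illustration, not the paper's implementation.

```python
import random
import string

def add_noise(text: str, rate: float, rng: random.Random) -> str:
    """Replace a fraction `rate` of characters with random alphanumerics."""
    chars = list(text)
    for i, _ in enumerate(chars):
        if rng.random() < rate:
            chars[i] = rng.choice(string.ascii_letters + string.digits)
    return "".join(chars)

def redact(text: str, rate: float, rng: random.Random) -> str:
    """Drop a fraction `rate` of whitespace-delimited tokens entirely."""
    return " ".join(t for t in text.split() if rng.random() >= rate)

# Progressively obfuscate a LeetCode-style prompt.
rng = random.Random(42)
prompt = ("Given an array of integers, return indices of the two numbers "
          "that add up to a given target.")
for level in (0.2, 0.5, 0.8):
    print(f"{level:.1f} | {redact(add_noise(prompt, level, rng), level, rng)}")
```

At high rates the output retains almost none of the original instruction, which is the regime in which the paper reports that models still recover the intended task.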