LLM Performance for Code Generation on Noisy Tasks

📅 2025-05-29
🤖 AI Summary
This study investigates whether large language models (LLMs) solve benchmark tasks (e.g., LeetCode, MATH) via genuine reasoning or mere memorisation of training data, by subjecting the tasks to extreme textual corruption, including aggressive token masking and random perturbations. The authors introduce *eager pattern matching*, a concept describing models that recognise and solve tasks from residual surface cues, and analyse performance decay curves to distinguish data-contaminated tasks (exhibiting gradual degradation) from genuinely unseen ones (showing sharp decline). Experiments across multiple models (GPT, Claude, Llama) reveal a consistent ability to solve corrupted problems that are unreadable to humans; this behaviour is absent on tasks published after the models' knowledge cutoff, indicating memorisation rather than reasoning as the underlying mechanism. The work provides a principled methodology for disentangling memorisation from reasoning, with direct implications for robust benchmark design, trustworthy model evaluation, and quantification of AI safety risks.

📝 Abstract
This paper investigates the ability of large language models (LLMs) to recognise and solve tasks which have been obfuscated beyond recognition. Focusing on competitive programming and benchmark tasks (LeetCode and MATH), we compare performance across multiple models and obfuscation methods, such as noise and redaction. We demonstrate that all evaluated LLMs can solve tasks obfuscated to a level where the text would be unintelligible to human readers and does not contain key pieces of instruction or context. We introduce the concept of eager pattern matching to describe this behaviour, which is not observed in tasks published after the models' knowledge cutoff date, indicating strong memorisation or overfitting to training data, rather than legitimate reasoning about the presented problem. We report empirical evidence of distinct performance decay patterns between contaminated and unseen datasets. We discuss the implications for benchmarking and evaluations of model behaviour, arguing for caution when designing experiments using standard datasets. We also propose measuring the decay of performance under obfuscation as a possible strategy for detecting dataset contamination, and we highlight potential safety risks and interpretability issues for automated software systems.
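The obfuscation methods the abstract mentions (noise and redaction) can be sketched as simple text transforms. This is a minimal illustrative sketch, not the paper's implementation; the function names, mask token, and replacement alphabet are assumptions.

```python
import random

def add_noise(text: str, rate: float, seed: int = 0) -> str:
    """Replace a fraction `rate` of characters with random symbols
    (illustrative stand-in for the paper's random perturbations)."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars)):
        if rng.random() < rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz0123456789#@*")
    return "".join(chars)

def redact(text: str, rate: float, seed: int = 0) -> str:
    """Mask a fraction `rate` of whitespace-delimited tokens
    (illustrative stand-in for the paper's redaction/token masking)."""
    rng = random.Random(seed)
    tokens = text.split()
    masked = ["____" if rng.random() < rate else t for t in tokens]
    return " ".join(masked)
```

Sweeping `rate` from 0 to 1 yields progressively corrupted versions of the same task, which is the input needed to plot a performance decay curve.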
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' ability to solve highly obfuscated coding tasks
Analyzes performance decay patterns in contaminated vs unseen datasets
Proposes obfuscation-based methods to detect dataset contamination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Demonstrates that LLMs solve tasks obfuscated beyond human readability
Introduces the concept of eager pattern matching
Proposes measuring performance decay under obfuscation
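The proposed contamination check, classifying a dataset by the shape of its decay curve, could look like the following hypothetical sketch. The mid-curve retention threshold and the gradual-vs-sharp decision rule are my illustrative assumptions, not the paper's.

```python
def classify_decay(scores: list[float], retention: float = 0.5) -> str:
    """Classify a performance decay curve measured at increasing
    obfuscation levels (scores[0] = unobfuscated baseline).

    Heuristic: if the model still retains most of its baseline accuracy
    at moderate obfuscation, degradation is gradual, which the paper
    associates with contaminated (memorised) tasks; a sharp early drop
    suggests genuinely unseen tasks.
    """
    base = scores[0]
    if base == 0:
        return "unseen"  # nothing to memorise if the baseline is zero
    mid = scores[len(scores) // 2]  # score at a mid-level of obfuscation
    return "contaminated" if mid / base >= retention else "unseen"
```

For example, `classify_decay([0.9, 0.85, 0.8, 0.75])` flags gradual degradation as contaminated, while `classify_decay([0.9, 0.4, 0.1, 0.0])` flags a sharp decline as unseen.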
Radzim Sendyka
University of Cambridge
Christian Cabrera
University of Cambridge
Andrei Paleyes
PhD Candidate, University of Cambridge
Machine learning, statistical emulation, software
Diana Robinson
University of Cambridge
Neil Lawrence
University of Cambridge