🤖 AI Summary
This study investigates which attributes of code data most effectively enhance the reasoning capabilities of large language models (LLMs).
Method: We construct a parallel instruction dataset spanning ten programming languages and apply controlled perturbations to systematically decouple structural and semantic properties of code. Fine-tuning and evaluation are conducted across five model families and eight parameter scales.
Contribution/Results: LLMs exhibit high sensitivity to structural perturbations; abstract representations such as pseudocode and flowcharts achieve performance comparable to real code; and syntactic style significantly affects task performance. Crucially, perturbed code that preserves surface-level statistical regularities (e.g., indentation patterns, keyword distributions) retains strong reasoning efficacy, and moderate abstraction reduces token overhead while maintaining, or even improving, performance. This work provides the first empirical evidence that surface-level statistical regularities in code play a central role in LLM reasoning, challenging the assumption that deep semantic fidelity is necessary for effective code-based reasoning.
📝 Abstract
Code data has been shown to enhance the reasoning capabilities of large language models (LLMs), but it remains unclear which aspects of code are most responsible. We investigate this question with a systematic, data-centric framework. We construct parallel instruction datasets in ten programming languages and apply controlled perturbations that selectively disrupt structural or semantic properties of code. We then fine-tune LLMs from five model families and eight parameter scales on each variant and evaluate their performance on natural language, math, and code tasks. Across 3,331 experiments, our results show that LLMs are more vulnerable to structural perturbations than to semantic ones, particularly on math and code tasks. Appropriate abstractions such as pseudocode and flowcharts can be as effective as code, and encoding the same information in fewer tokens, without adhering to the original syntax, often retains or even improves performance. Remarkably, even corrupted code with misleading signals remains competitive as long as surface-level regularities persist. Finally, syntactic style also shapes task-specific gains, with Python favoring natural-language reasoning and lower-level languages such as Java and Rust favoring math. Through our systematic framework, we aim to provide insight into how different properties of code influence reasoning and to inform the design of training data for enhancing LLM reasoning capabilities.
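To make the structural-versus-semantic distinction concrete, here is a minimal sketch of the two perturbation types the abstract describes. The function names and specific transformations are illustrative assumptions, not the paper's exact perturbation set: a structural perturbation scrambles line order (breaking syntax and layout), while a semantic perturbation swaps identifiers for misleading names but leaves indentation, keywords, and structure intact.

```python
import random

def structural_perturbation(code: str, seed: int = 0) -> str:
    """Disrupt STRUCTURE: shuffle line order, destroying syntactic
    layout while keeping the token content (illustrative example)."""
    lines = code.splitlines()
    rng = random.Random(seed)
    rng.shuffle(lines)
    return "\n".join(lines)

def semantic_perturbation(code: str) -> str:
    """Disrupt SEMANTICS: rename identifiers to misleading names,
    while indentation, keywords, and control flow stay intact
    (illustrative example)."""
    return code.replace("total", "smallest").replace("item", "count")

snippet = (
    "def add_all(items):\n"
    "    total = 0\n"
    "    for item in items:\n"
    "        total += item\n"
    "    return total\n"
)

print(structural_perturbation(snippet))  # same lines, scrambled order
print(semantic_perturbation(snippet))    # same shape, misleading names
```

The sketch shows why the two perturbations decouple the properties under study: the structural variant preserves the multiset of tokens but breaks the surface-level regularities (indentation, keyword placement) the paper finds critical, while the semantic variant preserves exactly those regularities and only corrupts meaning.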