CodeComplex: Dataset for Worst-Case Time Complexity Prediction

📅 2024-01-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing benchmarks for code time-complexity prediction suffer from small-scale, monolingual datasets and coarse-grained complexity labels, limiting their ability to assess models' deep reasoning about worst-case asymptotic complexity. Method: We introduce CodeComplex, the first large-scale, cross-lingual (Java/Python) benchmark (9.8K samples), focusing on input-sensitive nested loops and conditional structures. We propose input-aware, fine-grained complexity annotations and a bias-aware evaluation metric that goes beyond categorical accuracy. Our methodology includes an algorithm-expert collaborative annotation protocol, multilingual controllable code generation, and a dedicated evaluation framework targeting complexity reasoning. Results: We open-source the dataset and baseline models and validate their effectiveness and robustness across multiple LLMs. Experiments demonstrate improved assessment fidelity for worst-case complexity reasoning, highlight annotation-bias mitigation, and establish new standards for rigorous, input-sensitive complexity evaluation.
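The summary highlights input-sensitive nested loops and conditional structures as the core difficulty: the same loop nest can have different worst-case behavior depending on input characteristics. A minimal illustrative Python example (not from the dataset itself; function name and logic are invented for illustration):

```python
def count_pairs(nums, limit):
    """Count pairs (i, j) with nums[i] + nums[j] <= limit.

    Worst case is O(n^2) when the inner break never fires (e.g. a large
    limit), but an input-dependent early exit can make typical runs much
    cheaper -- the kind of input sensitivity the benchmark targets.
    """
    count = 0
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] > limit:
                break  # conditional early exit: complexity depends on input
            count += 1
    return count
```

A coarse, input-agnostic label would call this simply quadratic; an input-aware annotation must state the worst case and the input condition that triggers it.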

📝 Abstract
The reasoning ability of Large Language Models (LLMs) is crucial, especially in complex decision-making tasks. One significant task that demonstrates LLMs' reasoning capability is code time-complexity prediction, which involves intricate factors such as the input range of variables and conditional loops. Current benchmarks fall short of providing a rigorous assessment due to limited data, language constraints, and insufficient labeling. They do not consider time complexity based on input representation and merely evaluate whether predictions fall into the same class, lacking a measure of how close incorrect predictions are to the correct ones. To address these deficiencies, we introduce CodeComplex, the first robust and extensive dataset designed to evaluate LLMs' reasoning abilities in predicting code time complexity. CodeComplex comprises 4,900 Java codes and an equivalent number of Python codes, overcoming language and labeling constraints, each carefully annotated with complexity labels based on input characteristics by a panel of algorithmic experts. Additionally, we propose specialized evaluation metrics for complexity-prediction reasoning, offering a more precise and reliable assessment of LLMs' reasoning capabilities. We release our dataset (https://github.com/sybaik1/CodeComplex-Data) and baseline models (https://github.com/sybaik1/CodeComplex-Models) publicly to encourage the relevant (NLP, SE, and PL) communities to utilize and participate in this research.
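The abstract argues that pure categorical accuracy ignores how close an incorrect prediction is to the correct class. A minimal sketch of a distance-aware score over an ordered hierarchy of complexity classes (the class list and scoring formula here are assumptions for illustration, not the paper's actual metric):

```python
# Assumed ordering of complexity classes from cheapest to most expensive;
# the actual CodeComplex label set may differ.
CLASSES = ["O(1)", "O(log n)", "O(n)", "O(n log n)", "O(n^2)", "O(n^3)", "O(2^n)"]

def distance_aware_score(pred, gold):
    """Return a score in [0, 1]: 1 for an exact match, decaying linearly
    with the rank distance between predicted and gold classes."""
    d = abs(CLASSES.index(pred) - CLASSES.index(gold))
    return 1 - d / (len(CLASSES) - 1)
```

Under such a score, predicting O(n log n) for an O(n) program is penalized far less than predicting O(2^n), which is the intuition behind measuring closeness rather than only class membership.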
Problem

Research questions and friction points this paper is trying to address.

Language Model Evaluation
Code Execution Time Prediction
Complex Decision-Making
Innovation

Methods, ideas, or system contributions that make the work stand out.

CodeComplex
Language Model Evaluation
Code Runtime Prediction
Seung-Yeop Baik
Yonsei University, Seoul, South Korea
Joonghyuk Hahn
Yonsei University, Seoul, South Korea
formal languages, theory of computation, NLP, AI
Jungin Kim
Yonsei University, Seoul, South Korea
Aditi
University of Seoul, Seoul, South Korea
Mingi Jeon
Kangwon National University, Chuncheon, South Korea
Yo-Sub Han
School of Computing, Yonsei University
automata theory, formal languages, algorithm design, information retrieval
Sang-Ki Ko
University of Seoul
theory of computation, algorithms, artificial intelligence