🤖 AI Summary
Existing evaluations of large language models (LLMs) lack a systematic assessment of their ability to understand and generate code with correct time/space complexity.
Method: We introduce BigO(Bench)—the first benchmark explicitly designed for complexity-aware code evaluation—comprising 3,105 programming problems and over one million solutions. We propose an automated complexity-labeling framework that integrates Python profiling, dynamic input-scaling experiments, and complexity-inference algorithms to assign empirically validated (synthetic) time/space complexity labels.
Contribution/Results: BigO(Bench) fills a critical gap in LLM evaluation by enabling rigorous assessment of complexity reasoning. Our empirical analysis reveals that state-of-the-art models, while proficient in general code generation, exhibit severe deficiencies in complexity-constrained generation—in particular failing on unseen complexity classes, with significant performance degradation across both time and space complexity tasks. This work establishes the first standardized foundation for evaluating and advancing complexity-aware code intelligence in LLMs.
📝 Abstract
We introduce BigO(Bench), a novel coding benchmark designed to evaluate the capabilities of generative language models in understanding and generating code with specified time and space complexities. This benchmark addresses a gap in current evaluations, which often overlook the ability of models to comprehend and produce code constrained by computational complexity. BigO(Bench) includes tooling to infer the algorithmic complexity of any Python function from profiling measurements, including human- or LLM-generated solutions. BigO(Bench) also includes a set of 3,105 coding problems and 1,190,250 solutions from Code Contests, annotated with inferred (synthetic) time and space complexity labels from the complexity framework, as well as corresponding runtime and memory footprint values for a large set of input sizes. We present results from evaluating multiple state-of-the-art language models on this benchmark, highlighting their strengths and weaknesses in handling complexity requirements. In particular, token-space reasoning models are unrivaled in code generation but not in complexity understanding, hinting that they may not generalize well to tasks for which no reward was given at training time.
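To illustrate the dynamic input-scaling idea behind the complexity framework, here is a minimal sketch (not the paper's actual tooling): time a function at several input sizes, then least-squares-fit the measurements against a few candidate complexity classes and report the best match. The function names (`infer_time_complexity`, `quadratic_pairs`) and the candidate set are assumptions made for this example.

```python
import math
import time

# Candidate complexity classes to fit against (a deliberately small set).
CANDIDATES = {
    "O(n)": lambda n: n,
    "O(n log n)": lambda n: n * math.log(n),
    "O(n^2)": lambda n: n * n,
}

def infer_time_complexity(func, sizes):
    """Time `func` on inputs of each size and return the best-fitting class."""
    times = []
    for n in sizes:
        data = list(range(n))
        start = time.perf_counter()
        func(data)
        times.append(time.perf_counter() - start)

    best_label, best_err = None, float("inf")
    for label, shape in CANDIDATES.items():
        feats = [shape(n) for n in sizes]
        # Closed-form least-squares scale c minimizing sum((t - c*f)^2).
        c = sum(t * f for t, f in zip(times, feats)) / sum(f * f for f in feats)
        # Normalized residual so classes are comparable across scales.
        err = sum((t - c * f) ** 2 for t, f in zip(times, feats)) / sum(
            t * t for t in times
        )
        if err < best_err:
            best_label, best_err = label, err
    return best_label

def quadratic_pairs(data):
    """Deliberately O(n^2): count ordered pairs with x < y."""
    s = 0
    for x in data:
        for y in data:
            s += x < y
    return s
```

Calling `infer_time_complexity(quadratic_pairs, [200, 400, 800, 1600])` should identify the quadratic shape, since runtimes grow roughly 64x over an 8x size range. A real framework would also need warm-up runs, repeated measurements to damp timer noise, and memory profiling for space-complexity labels.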