🤖 AI Summary
Existing code generation benchmarks fail to represent real-world development scenarios, limiting their utility in guiding model deployment and optimization. This work proposes the first multilingual, multitask evaluation benchmark grounded in real developer telemetry data, spanning six programming languages and six canonical task types, with a strong emphasis on ecological validity and the absence of data contamination. It introduces a multidimensional evaluation framework combining functional correctness testing, code similarity analysis, and LLM-as-a-judge assessment to enable fine-grained diagnostics and context-aware performance measurement. A systematic evaluation of nine state-of-the-art models reveals substantial disparities in syntactic accuracy, semantic reasoning, and practical utility, offering actionable empirical insights for model selection and improvement.
📝 Abstract
DevBench is a telemetry-driven benchmark designed to evaluate Large Language Models (LLMs) on realistic code completion tasks. It comprises 1,800 evaluation instances across six programming languages and six task categories derived from real developer telemetry, such as API usage and code purpose understanding. Unlike prior benchmarks, it emphasizes ecological validity, avoids training data contamination, and enables detailed diagnostics. The evaluation combines functional correctness, similarity-based metrics, and LLM-judge assessments focused on usefulness and contextual relevance. Nine state-of-the-art models were assessed, revealing differences in syntactic precision, semantic reasoning, and practical utility. Our benchmark provides actionable insights to guide model selection and improvement, detail that is often missing from other benchmarks but is essential for both practical deployment and targeted model development.