DevBench: A Realistic, Developer-Informed Benchmark for Code Generation Models

📅 2026-01-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code generation benchmarks fail to represent real-world development scenarios, limiting their utility in guiding model deployment and optimization. This work proposes the first multilingual, multitask evaluation benchmark grounded in real developer telemetry, spanning six programming languages and six canonical task types, with a strong emphasis on ecological validity and freedom from data contamination. It introduces a multidimensional evaluation framework combining functional correctness testing, code similarity analysis, and LLM-as-a-judge assessment to enable fine-grained diagnostics and context-aware performance assessment. A systematic evaluation of nine state-of-the-art models reveals substantial disparities in syntactic accuracy, semantic reasoning, and practical utility, offering actionable empirical insights for model selection and improvement.

📝 Abstract
DevBench is a telemetry-driven benchmark designed to evaluate Large Language Models (LLMs) on realistic code completion tasks. It includes 1,800 evaluation instances across six programming languages and six task categories derived from real developer telemetry, such as API usage and code purpose understanding. Unlike prior benchmarks, it emphasizes ecological validity, avoids training data contamination, and enables detailed diagnostics. The evaluation combines functional correctness, similarity-based metrics, and LLM-judge assessments focused on usefulness and contextual relevance. Nine state-of-the-art models were assessed, revealing differences in syntactic precision, semantic reasoning, and practical utility. Our benchmark provides actionable insights to guide model selection and improvement, detail that is often missing from other benchmarks but essential for both practical deployment and targeted model development.
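The abstract's multidimensional evaluation (functional correctness, similarity metrics, and an LLM-judge score) could be sketched per instance roughly as follows. This is an illustrative assumption, not DevBench's actual code; the `EvalResult` record, the `difflib`-based similarity (a stand-in for metrics like edit distance or CodeBLEU), and the hypothetical `evaluate` helper are all this sketch's inventions.

```python
# Hypothetical sketch of combining the three evaluation signals described
# in the abstract; names and metrics are illustrative, not from the paper.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class EvalResult:
    functional_pass: bool  # did the generated code pass its unit tests?
    similarity: float      # textual similarity to a reference solution, in [0, 1]
    judge_score: float     # LLM-judge usefulness rating, normalized to [0, 1]


def text_similarity(generated: str, reference: str) -> float:
    """Character-level similarity via difflib, a simple proxy for the
    paper's similarity-based metrics."""
    return SequenceMatcher(None, generated, reference).ratio()


def evaluate(generated: str, reference: str,
             tests_pass: bool, judge_score: float) -> EvalResult:
    """Bundle the three signals into one per-instance record."""
    return EvalResult(tests_pass, text_similarity(generated, reference), judge_score)


result = evaluate("def add(a, b): return a + b",
                  "def add(x, y): return x + y",
                  tests_pass=True, judge_score=0.9)
```

Keeping the three signals separate per instance, rather than collapsing them into one number, is what enables the fine-grained, context-aware diagnostics the abstract highlights.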
Problem

Research questions and friction points this paper is trying to address.

code generation
benchmark
large language models
ecological validity
developer telemetry
Innovation

Methods, ideas, or system contributions that make the work stand out.

telemetry-driven benchmark
ecological validity
code completion
training data contamination
multi-dimensional evaluation
Pareesa Ameneh Golnari (Microsoft)
Adarsh Kumarappan (unknown affiliation)
Wen Wen (Microsoft)
Xiaoyu Liu (Microsoft)
Gabriel Ryan (Microsoft)
Yuting Sun (Microsoft)
Shengyu Fu (Microsoft)
Elsie Nallipogu (Microsoft)