GitTaskBench: A Benchmark for Code Agents Solving Real-World Tasks Through Code Repository Leveraging

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks inadequately evaluate code agents' ability to solve real-world tasks by leveraging large-scale code repositories (e.g., GitHub) within authentic development workflows. Method: We introduce GitTaskBench, the first repository-aware benchmark for code agents, comprising 54 end-to-end tasks across seven domains and seven modalities. We design an evaluation framework that integrates human annotation with automated execution, and we propose alpha-value, a novel economic metric that jointly quantifies task success rate, token cost, and developer salary to assess agent ROI. Results: Experiments with state-of-the-art frameworks (e.g., OpenHands) and multiple large language models reveal that the best-performing system completes only 48.15% of tasks. Critical weaknesses emerge in environment setup and dependency management, underscoring fundamental limitations in handling complex, multi-step software engineering workflows.

📝 Abstract
Beyond scratch coding, exploiting large-scale code repositories (e.g., GitHub) for practical tasks is vital in real-world software development, yet current benchmarks rarely evaluate code agents in such authentic, workflow-driven scenarios. To bridge this gap, we introduce GitTaskBench, a benchmark designed to systematically assess this capability via 54 realistic tasks across 7 modalities and 7 domains. Each task pairs a relevant repository with an automated, human-curated evaluation harness specifying practical success criteria. Beyond measuring execution and task success, we also propose the alpha-value metric to quantify the economic benefit of agent performance, which integrates task success rates, token cost, and average developer salaries. Experiments across three state-of-the-art agent frameworks with multiple advanced LLMs show that leveraging code repositories for complex task solving remains challenging: even the best-performing system, OpenHands+Claude 3.7, solves only 48.15% of tasks. Error analysis attributes over half of failures to seemingly mundane yet critical steps like environment setup and dependency resolution, highlighting the need for more robust workflow management and increased timeout preparedness. By releasing GitTaskBench, we aim to drive progress and attention toward repository-aware code reasoning, execution, and deployment -- moving agents closer to solving complex, end-to-end real-world tasks. The benchmark and code are open-sourced at https://github.com/QuantaAlpha/GitTaskBench.
Problem

Research questions and friction points this paper is trying to address.

Evaluating code agents' ability to leverage repositories for real-world tasks
Assessing performance in workflow-driven scenarios with practical success criteria
Measuring economic benefit of agent performance through integrated metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark with realistic repository tasks and automated evaluation
Alpha-value metric combining success rates with economic costs
Open-source framework testing repository-aware code reasoning capabilities
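The exact formula for the alpha-value metric is not given on this page; as a rough illustration of how an ROI-style metric could combine the three quantities it integrates (success rate, token cost, and developer salary), consider the sketch below. The function name, the formula, and the example numbers are all assumptions for illustration, not the paper's definition:

```python
def alpha_value(success_rate: float, token_cost_usd: float,
                human_cost_usd: float) -> float:
    """Illustrative ROI-style metric (NOT the paper's exact formula).

    Expected human labor cost avoided by the agent, minus the agent's own
    token spend, normalized by the human cost of doing the task manually.
    Positive values mean the agent is cheaper in expectation than a human.
    """
    expected_savings = success_rate * human_cost_usd  # labor cost avoided
    return (expected_savings - token_cost_usd) / human_cost_usd

# Hypothetical example: a task a developer would bill ~$200 for, where the
# agent succeeds 48.15% of the time and spends $3 on tokens.
roi = alpha_value(success_rate=0.4815, token_cost_usd=3.0,
                  human_cost_usd=200.0)
print(round(roi, 4))  # → 0.4665
```

Under this sketch, an agent only "pays for itself" when its expected labor savings exceed its token bill, which is consistent with the paper's framing of alpha-value as an economic benefit measure.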