SWE-MERA: A Dynamic Benchmark for Agenticly Evaluating Large Language Models on Software Engineering Tasks

📅 2025-07-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM evaluation benchmarks for software engineering (e.g., SWE-bench) suffer from data contamination, particularly solution leakage, and from inadequate test coverage, leading to biased and unreliable assessments. To address these issues, the authors propose SWE-MERA, a continuously updated, dynamic evaluation framework: it automatically collects real-world GitHub issues via the GitHub API, applies multi-stage filtering and human verification to ensure task quality, and uses coding agents (e.g., Aider) for automated, reproducible evaluation, significantly mitigating data leakage and improving timeliness. The pipeline yields roughly 10,000 candidate tasks, of which 300 high-quality, contamination-free instances are publicly released. Evaluations across more than ten state-of-the-art LLMs demonstrate strong discriminative power, exposing nuanced capability differences in realistic coding scenarios. The benchmark enables long-term, fair, and reliable assessment of LLMs' software engineering proficiency.
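The collection step described above, harvesting recently resolved issues through the GitHub API, can be sketched roughly as follows. This is an illustrative query builder only: the paper does not publish its exact search qualifiers, so the cutoff date, language filter, and page size here are assumptions.

```python
from urllib.parse import urlencode

GITHUB_SEARCH_API = "https://api.github.com/search/issues"

def build_issue_query(language: str, created_after: str, page: int = 1) -> str:
    """Build a GitHub Search API URL for closed issues that were
    resolved by a linked pull request and created after a cutoff
    date (ISO format, e.g. "2024-09-01"). Qualifiers are illustrative."""
    q = " ".join([
        "is:issue",
        "is:closed",
        "linked:pr",                    # only issues resolved via a PR
        f"language:{language}",         # filter by repo primary language
        f"created:>{created_after}",
    ])
    params = {
        "q": q,
        "sort": "created",
        "order": "desc",
        "per_page": 100,                # GitHub's maximum page size
        "page": page,
    }
    return f"{GITHUB_SEARCH_API}?{urlencode(params)}"

# Example: issues from the paper's collection window onward.
url = build_issue_query("python", "2024-09-01")
```

A real collector would page through results with an authenticated client and respect rate limits; only the query construction is shown here.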

📝 Abstract
The rapid advancement of Large Language Models (LLMs) in software engineering has revealed critical limitations in existing benchmarks, particularly the widely used SWE-bench dataset. Recent studies have uncovered severe data contamination issues: in SWE-bench, 32.67% of successful patches involve direct solution leakage and 31.08% pass only due to inadequate test cases. We introduce SWE-MERA, a dynamic, continuously updated benchmark designed to address these fundamental challenges through automated collection of real-world GitHub issues and rigorous quality validation. Our approach implements a reliable pipeline that ensures quality while minimizing contamination risks, resulting in approximately 10,000 potential tasks, with 300 samples currently available. Evaluation using the Aider coding agent demonstrates strong discriminative power among state-of-the-art models. We report performance across a dozen recent LLMs evaluated on tasks collected between September 2024 and June 2025.
Problem

Research questions and friction points this paper is trying to address.

Address data contamination in software engineering benchmarks
Provide dynamic evaluation for LLMs on real-world GitHub issues
Ensure benchmark quality with automated validation and minimal leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic benchmark for LLM evaluation
Automated GitHub issue collection
Rigorous quality validation pipeline
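The quality-validation pipeline listed above can be pictured as a chain of filters over candidate tasks. The stages below are a hypothetical sketch: the paper describes multi-stage filtering at a high level, but the specific checks and thresholds here (issue length, patch size, presence of tests) are illustrative assumptions, not the authors' published criteria.

```python
def passes_filters(task: dict) -> bool:
    """Hypothetical multi-stage filter for a candidate task built from a
    GitHub issue and its resolving pull request. Thresholds are
    illustrative, not taken from the paper."""
    issue = task.get("issue_body") or ""
    patch_files = task.get("patch_files", [])
    test_files = task.get("test_files", [])
    # Stage 1: the issue must contain a substantive description.
    if len(issue.split()) < 20:
        return False
    # Stage 2: the fix should touch a small, reviewable number of files.
    if not 1 <= len(patch_files) <= 5:
        return False
    # Stage 3: the linked PR must add or modify tests, so a candidate
    # solution can be validated automatically against them.
    if not test_files:
        return False
    return True

def filter_candidates(tasks: list[dict]) -> list[dict]:
    """Keep only tasks that survive every filtering stage."""
    return [t for t in tasks if passes_filters(t)]
```

In the full system, surviving tasks would additionally go through human verification before release, mirroring the ~10,000 candidates reduced to 300 published instances.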