🤖 AI Summary
Existing LLM evaluation benchmarks for software engineering (e.g., SWE-bench) suffer from data contamination, particularly solution leakage, and from inadequate test coverage, leading to biased and unreliable assessments. To address these issues, the authors propose a continuously evolving, dynamic evaluation framework: it automatically collects real-world GitHub issues via the GitHub API, applies multi-stage filtering and human verification to ensure task quality, and uses a coding agent (Aider) for automated, reproducible evaluation, which mitigates data leakage and keeps the benchmark current. The pipeline yields roughly 10,000 candidate tasks, of which 300 high-quality, contamination-free instances are publicly released. Evaluations of more than ten state-of-the-art LLMs show strong discriminative power, exposing nuanced capability differences in realistic coding scenarios. This benchmark enables long-term, fair, and reliable assessment of LLMs' software engineering proficiency.
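The multi-stage filtering step described above could be sketched as follows. This is a minimal illustration only: the field names, stages, and thresholds here are assumptions for the sake of example, not SWE-MERA's actual criteria.

```python
# Illustrative multi-stage filter over collected GitHub issues.
# Field names and thresholds are hypothetical, not SWE-MERA's real pipeline.

def filter_candidates(issues):
    """Keep only issues that look like evaluable SWE tasks."""
    kept = []
    for issue in issues:
        # Stage 1: the issue must be resolved by a merged pull request.
        if not issue.get("linked_pr_merged"):
            continue
        # Stage 2: the fixing PR must touch tests, otherwise a candidate
        # patch cannot be verified automatically.
        if issue.get("tests_changed", 0) == 0:
            continue
        # Stage 3: discard issues created before the collection window
        # to reduce contamination risk.
        if issue.get("created_at", "") < "2024-09-01":
            continue
        kept.append(issue)
    return kept

sample = [
    {"id": 1, "linked_pr_merged": True, "tests_changed": 2, "created_at": "2025-01-15"},
    {"id": 2, "linked_pr_merged": False, "tests_changed": 1, "created_at": "2025-02-01"},
    {"id": 3, "linked_pr_merged": True, "tests_changed": 0, "created_at": "2025-03-10"},
]
print([i["id"] for i in filter_candidates(sample)])  # → [1]
```

Surviving candidates would then go to human verification, the final quality gate before release.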
📝 Abstract
The rapid advancement of Large Language Models (LLMs) in software engineering has revealed critical limitations in existing benchmarks, particularly the widely used SWE-bench dataset. Recent studies have uncovered severe data contamination: 32.67% of successful SWE-bench patches involve direct solution leakage, and 31.08% pass only due to inadequate test cases. We introduce SWE-MERA, a dynamic, continuously updated benchmark designed to address these fundamental challenges through automated collection of real-world GitHub issues and rigorous quality validation. Our approach implements a reliable pipeline that ensures task quality while minimizing contamination risk, yielding approximately 10,000 potential tasks, with 300 samples currently available. Evaluation using the Aider coding agent demonstrates strong discriminative power across state-of-the-art models. We report performance for a dozen recent LLMs on tasks collected between September 2024 and June 2025.