Where Do AI Coding Agents Fail? An Empirical Study of Failed Agentic Pull Requests in GitHub

📅 2026-01-21
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses the high rejection rate of pull requests (PRs) submitted by AI coding agents on GitHub, a phenomenon whose underlying causes remain poorly understood. Through a large-scale empirical analysis of 33,000 AI-generated PRs, combining data mining, quantitative statistics, and manual qualitative coding, the work presents the first hierarchical taxonomy of reasons for PR rejection. The findings uncover socio-technical factors—such as insufficient reviewer engagement and misalignment with project goals—that are often invisible to conventional metrics. PRs related to documentation, continuous integration (CI), and build tasks exhibit the highest merge rates, whereas those targeting performance optimization and bug fixes are least likely to be accepted. Unmerged PRs typically involve larger code changes, affect more files, and frequently fail CI validation.

📝 Abstract
AI coding agents are now submitting pull requests (PRs) to software projects, acting not just as assistants but as autonomous contributors. As these agentic contributions rapidly increase across real repositories, little is known about how they behave in practice and why many of them fail to be merged. In this paper, we conduct a large-scale study of 33k agent-authored PRs made by five coding agents across GitHub. (RQ1) We first quantitatively characterize merged and not-merged PRs along four broad dimensions: 1) merge outcomes across task types, 2) code changes, 3) CI build results, and 4) review dynamics. We observe that tasks related to documentation, CI, and build updates achieve the highest merge success, whereas performance and bug-fix tasks perform the worst. Not-merged PRs tend to involve larger code changes, touch more files, and often fail the project's CI/CD pipeline validation. (RQ2) To further investigate why some agentic PRs are not merged, we qualitatively analyze 600 PRs to derive a hierarchical taxonomy of rejection patterns. This analysis complements the quantitative findings in RQ1 by uncovering rejection reasons not captured by quantitative metrics, including lack of meaningful reviewer engagement, duplicate PRs, unwanted feature implementations, and agent misalignment. Together, our findings highlight key socio-technical and human-AI collaboration factors that are critical to improving the success of future agentic workflows.
Problem

Research questions and friction points this paper is trying to address.

AI coding agents
pull requests
merge failure
GitHub
code contribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI coding agents
pull request failure
empirical study
human-AI collaboration
rejection taxonomy