When AI Teammates Meet Code Review: Collaboration Signals Shaping the Integration of Agent-Authored Pull Requests

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the gap in understanding how AI-generated code submissions integrate into human-led code review processes. Using the AIDev dataset, the authors combine logistic regression with repository-clustered standard errors and qualitative content analysis to identify the collaboration signals, such as reviewer engagement, that most strongly influence whether AI-generated pull requests are merged. The findings show that active reviewer participation significantly increases the likelihood of integration, whereas disruptive behaviors such as large-scale changes or force pushes reduce merge probability. These results suggest that effective AI collaboration hinges on alignment with established review norms and on forming a convergent feedback loop, moving beyond prior work that focused narrowly on code quality or iteration frequency.

📝 Abstract
Autonomous coding agents increasingly contribute to software development by submitting pull requests on GitHub, yet little is known about how these contributions integrate into human-driven review workflows. We present a large empirical study of agent-authored pull requests using the public AIDev dataset, examining integration outcomes, resolution speed, and review-time collaboration signals. Using logistic regression with repository-clustered standard errors, we find that reviewer engagement has the strongest correlation with successful integration, whereas larger change sizes and coordination-disrupting actions, such as force pushes, are associated with a lower likelihood of merging. In contrast, iteration intensity alone provides limited explanatory power once collaboration signals are considered. A qualitative analysis further shows that successful integration occurs when agents engage in actionable review loops that converge toward reviewer expectations. Overall, our results highlight that the effective integration of agent-authored pull requests depends not only on code quality but also on alignment with established review and coordination practices.
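To make the abstract's predictor variables concrete, the sketch below shows one plausible way to derive review-time collaboration signals (reviewer engagement, force pushes, change size, iteration intensity) from a pull-request event timeline. The `PullRequest` class, its field names, and the event-type strings are illustrative assumptions, not the AIDev dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    """Hypothetical PR record; field names are assumptions, not AIDev's schema."""
    events: list       # ordered event-type strings, e.g. "review_comment"
    additions: int = 0
    deletions: int = 0

def collaboration_signals(pr: PullRequest) -> dict:
    """Derive signals of the kind the study's regression uses as predictors."""
    return {
        # Reviewer engagement: count of human review activity on the PR.
        "reviewer_comments": sum(e == "review_comment" for e in pr.events),
        # Coordination-disrupting action flagged in the study.
        "force_pushed": "force_push" in pr.events,
        # Change size: total lines touched.
        "change_size": pr.additions + pr.deletions,
        # Iteration intensity: number of agent commits on the PR.
        "iterations": sum(e == "commit" for e in pr.events),
    }

pr = PullRequest(
    events=["commit", "review_comment", "commit", "force_push"],
    additions=120,
    deletions=30,
)
signals = collaboration_signals(pr)
print(signals)
# → {'reviewer_comments': 1, 'force_pushed': True, 'change_size': 150, 'iterations': 2}
```

In the study's setup, a binary merged/not-merged outcome would then be regressed on such signals, with standard errors clustered by repository to account for within-repository correlation.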
Problem

Research questions and friction points this paper is trying to address.

AI teammates
code review
pull requests
collaboration signals
software development
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI teammates
code review
pull request integration
collaboration signals
autonomous coding agents