🤖 AI Summary
This study addresses the growing ambiguity of code authorship on GitHub, where AI coding agents submit pull requests under developer accounts, undermining repository governance and research reproducibility. It demonstrates for the first time that mainstream AI coding agents, such as Codex and Claude Code, exhibit distinct, identifiable behavioral patterns. By analyzing 33,580 pull requests, the authors construct a 41-dimensional behavioral fingerprint that integrates commit metadata, pull request structure, and code-level features into a single multidimensional model. The approach achieves a 97.2% F1-score in multi-class identification across five AI agents, confirming that AI-generated code carries stable, discriminative behavioral signatures. This work establishes a new paradigm for tracing AI-authored code and strengthening platform governance in software ecosystems increasingly shaped by generative AI.
📝 Abstract
AI coding agents are reshaping software development through both autonomous and human-mediated pull requests (PRs). When developers use AI agents to generate code under their own accounts, code authorship attribution becomes critical for repository governance, research validity, and understanding modern development practices. We present the first study on fingerprinting AI coding agents, analyzing 33,580 PRs from five major agents (OpenAI Codex, GitHub Copilot, Devin, Cursor, Claude Code) to identify behavioral signatures. With 41 features spanning commit messages, PR structure, and code characteristics, we achieve 97.2% F1-score in multi-class agent identification. We uncover distinct fingerprints: Codex shows unique multiline commit patterns (67.5% feature importance), and Claude Code exhibits distinctive code structure (27.2% importance of conditional statements). These signatures reveal that AI coding tools produce detectable behavioral patterns, suggesting potential for identifying AI contributions in software repositories.
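To make the fingerprinting idea concrete, here is a minimal sketch of the pipeline the paper describes: extract behavioral features from a PR (commit-message shape, PR structure, code characteristics) and classify the vector against per-agent profiles. This is not the authors' implementation; the three features, the agent centroids, and the nearest-centroid classifier are invented stand-ins for the paper's 41 features and trained model.

```python
def extract_features(pr):
    """Map a PR dict to a small feature vector.

    The paper uses 41 features spanning commit messages, PR structure,
    and code characteristics; three toy examples stand in for them here.
    """
    msg = pr["commit_message"]
    return [
        msg.count("\n") + 1,       # commit message: number of lines
        len(pr["files_changed"]),  # PR structure: files touched
        pr["diff"].count("if "),   # code: rough count of conditionals
    ]

def nearest_centroid(x, centroids):
    """Return the agent label whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical per-agent centroids, as if averaged from labeled PRs.
# The paper reports Codex favoring multiline commits and Claude Code
# showing distinctive conditional-statement usage; values are made up.
centroids = {
    "codex":       [6.0, 3.0, 2.0],
    "claude_code": [2.0, 4.0, 9.0],
}

pr = {
    "commit_message": "Fix parser\n\n- handle empty input\n- add tests",
    "files_changed": ["parser.py", "test_parser.py"],
    "diff": "+    if not text:\n+        return []\n",
}

features = extract_features(pr)           # -> [4, 2, 1]
print(nearest_centroid(features, centroids))  # -> codex
```

In practice the paper's classifier is trained on 33,580 labeled PRs rather than hand-set centroids, but the shape of the task is the same: a fixed-length behavioral vector per PR, mapped to one of five agent labels.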