🤖 AI Summary
This work addresses author attribution for code generated by large language models (LLMs) and presents the first systematic, program-level provenance analysis for C code. The authors propose CodeT5-Authorship, a lightweight encoder-only model derived from CodeT5 by removing the decoder and attaching a two-layer GELU classification head to the encoder's first-token representation. They also introduce LLM-AuthorBench, the first open-source benchmark for LLM code attribution, comprising 32,000 compilable C programs generated by eight mainstream LLMs. On binary classification (GPT-4.1 vs. GPT-4o) and five-way multi-class classification, CodeT5-Authorship achieves 97.56% and 95.40% accuracy, respectively, substantially outperforming seven traditional machine-learning baselines and eight fine-tuned Transformer variants. All models, datasets, and training scripts are publicly released.
📝 Abstract
Detecting AI-generated code, deepfakes, and other synthetic content is an emerging research challenge. As code generated by Large Language Models (LLMs) becomes more common, identifying the specific model behind each sample is increasingly important. This paper presents the first systematic study of LLM authorship attribution for C programs. We present CodeT5-Authorship, a novel model that retains only the encoder layers of the original CodeT5 encoder-decoder architecture, discarding the decoder to focus on classification. The encoder's first-token output is passed through a two-layer classification head with GELU activation and dropout, producing a probability distribution over candidate authors. To evaluate our approach, we introduce LLM-AuthorBench, a benchmark of 32,000 compilable C programs generated by eight state-of-the-art LLMs across diverse tasks. We compare our model to seven traditional ML classifiers and eight fine-tuned transformer models: BERT, RoBERTa, CodeBERT, ModernBERT, DistilBERT, DeBERTa-V3, Longformer, and LoRA-fine-tuned Qwen2-1.5B. Our model achieves 97.56% accuracy in binary classification distinguishing C programs generated by closely related models such as GPT-4.1 and GPT-4o, and 95.40% accuracy in multi-class attribution among five leading LLMs (Gemini 2.5 Flash, Claude 3.5 Haiku, GPT-4.1, Llama 3.3, and DeepSeek-V3). To support open science, we release the CodeT5-Authorship architecture, the LLM-AuthorBench benchmark, and all relevant Google Colab scripts on GitHub: https://github.com/LLMauthorbench/.
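The classification head described above (first-token pooling of the encoder output, followed by a two-layer GELU head with dropout) can be sketched roughly as follows. This is a hypothetical PyTorch illustration, not the authors' released code: the hidden size, head width, and dropout rate are illustrative assumptions, and the random tensor stands in for real CodeT5 encoder states.

```python
import torch
import torch.nn as nn

class AuthorshipHead(nn.Module):
    """Sketch of a two-layer GELU classification head over the
    encoder's first-token representation (sizes are assumptions)."""

    def __init__(self, hidden_size=768, num_authors=5, dropout=0.1):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),  # first layer
            nn.GELU(),                            # GELU activation
            nn.Dropout(dropout),                  # dropout regularization
            nn.Linear(hidden_size, num_authors),  # second layer -> logits
        )

    def forward(self, encoder_hidden_states):
        # encoder_hidden_states: (batch, seq_len, hidden_size)
        first_token = encoder_hidden_states[:, 0, :]  # first-token pooling
        logits = self.head(first_token)
        # probability distribution over candidate author models
        return torch.softmax(logits, dim=-1)

# Example: a batch of 2 token sequences of length 128
head = AuthorshipHead()
head.eval()
probs = head(torch.randn(2, 128, 768))
print(probs.shape)  # (2, 5): one distribution over 5 authors per sample
```

In the full model, `encoder_hidden_states` would come from the CodeT5 encoder applied to the tokenized C program; each row of `probs` sums to 1, and the argmax gives the predicted source LLM.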