Automated Type Annotation in Python Using Large Language Models

📅 2025-08-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Manual Python type annotation is error-prone and time-consuming, while conventional automated approaches suffer from limited type vocabulary coverage, coarse-grained behavioral modeling, and heavy reliance on large-scale annotated data. Method: This paper proposes a fine-tuning-free, large language model (LLM)-driven iterative framework, Generate-Check-Refine, that integrates Concrete Syntax Tree (CST)-guided syntactic constraints with Mypy static type-checking feedback to enable fine-grained, context-aware type inference. Contribution/Results: We present the first systematic zero-shot evaluation of general-purpose and reasoning-optimized models, including GPT-4o mini, GPT-4.1 mini, o3-mini, and o4-mini, for type inference. On 6,000 real-world code snippets, our approach achieves 88.6% type consistency and 70.5% exact-match accuracy with an average of fewer than one refinement iteration per snippet, matching the performance of supervised deep learning methods that require extensive labeled data.
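
The Generate-Check-Refine loop is straightforward to sketch. The following is a minimal illustration rather than the authors' implementation: `llm_annotate` is a hypothetical stand-in for the LLM call, while Mypy is driven through its real `mypy.api.run` entry point.

```python
import os
import tempfile

from mypy import api  # pip install mypy


def generate_check_refine(source: str, llm_annotate, max_iters: int = 3) -> str:
    """Ask the LLM for annotations, validate with Mypy, refine on errors.

    `llm_annotate(source, feedback)` is a hypothetical helper that returns
    annotated source, optionally conditioned on Mypy's error report.
    """
    feedback = ""
    annotated = source
    for _ in range(max_iters):
        annotated = llm_annotate(annotated, feedback)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(annotated)
            path = f.name
        try:
            report, _errors, exit_status = api.run([path])
        finally:
            os.unlink(path)
        if exit_status == 0:      # Mypy found no errors: the snippet is consistent
            return annotated
        feedback = report         # feed Mypy's diagnostics back to the LLM
    return annotated              # best effort after max_iters
```

In the paper's terms, a snippet whose annotated version passes Mypy with no errors counts as consistent; on average, fewer than one refinement round was needed.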

📝 Abstract
Type annotations in Python enhance maintainability and error detection. However, generating these annotations manually is error-prone and requires extra effort. Traditional automation approaches like static analysis, machine learning, and deep learning struggle with limited type vocabularies, behavioral over-approximation, and reliance on large labeled datasets. In this work, we explore the use of LLMs for generating type annotations in Python. We develop a generate-check-repair pipeline: the LLM proposes annotations guided by a Concrete Syntax Tree representation, a static type checker (Mypy) verifies them, and any errors are fed back for iterative refinement. We evaluate four LLM variants: GPT-4o mini and GPT-4.1 mini (general-purpose), and o3-mini and o4-mini (reasoning-optimized), on 6,000 code snippets from the ManyTypes4Py benchmark. We first measure the proportion of code snippets annotated by the LLMs for which Mypy reported no errors (i.e., consistent results): GPT-4o mini achieved consistency on 65.9% of cases (34.1% inconsistent), while GPT-4.1 mini, o3-mini, and o4-mini each reached approximately 88.6% consistency (around 11.4% failures). To measure annotation quality, we then compute exact-match and base-type-match accuracies over all 6,000 snippets: GPT-4.1 mini and o3-mini perform best, achieving up to 70.5% exact-match and 79.1% base-type accuracy while requiring under one repair iteration on average. Our results demonstrate that general-purpose and reasoning-optimized LLMs, without any task-specific fine-tuning or additional training, can be effective in generating consistent type annotations. They perform competitively with traditional deep learning techniques, which require large labeled datasets for training. While our work focuses on Python, the pipeline can be extended to other optionally typed imperative languages such as Ruby.
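
As a concrete illustration of the two quality metrics, the hedged sketch below computes exact-match and base-type-match accuracy over parallel lists of predicted and gold annotations. The `base_type` rule (strip generic parameters and case, so `List[int]` and `list[int]` share the base `list`) is our assumption of how base-type matching could work, not the paper's exact definition.

```python
def base_type(annotation: str) -> str:
    """Illustrative rule: 'List[int]' -> 'list', 'Optional[str]' -> 'optional'."""
    return annotation.split("[", 1)[0].strip().lower()


def score(predicted: list[str], gold: list[str]) -> tuple[float, float]:
    """Return (exact-match accuracy, base-type-match accuracy)."""
    exact = sum(p == g for p, g in zip(predicted, gold))
    base = sum(base_type(p) == base_type(g) for p, g in zip(predicted, gold))
    n = len(gold)
    return exact / n, base / n


# 'List[str]' misses the exact match against 'list[str]' but shares the base type.
preds = ["int", "List[str]", "Optional[str]"]
golds = ["int", "list[str]", "str"]
print(score(preds, golds))  # (0.333..., 0.666...)
```

Under any such rule, base-type accuracy is at least as high as exact-match accuracy, consistent with the reported 79.1% versus 70.5%.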
Problem

Research questions and friction points this paper is trying to address.

Automating Python type annotations using LLMs
Overcoming limitations of traditional annotation methods
Evaluating LLM performance on type consistency and accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based generate-check-repair pipeline
Concrete Syntax Tree guides annotation (see the sketch after this list)
Static type checker verifies annotations
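
To make the CST-guided step concrete, here is a hedged sketch using the libcst library (our choice for illustration; the paper only states that a Concrete Syntax Tree representation guides annotation). A transformer walks the tree and attaches a return annotation where one is missing, with the type map standing in for LLM output.

```python
import libcst as cst  # pip install libcst


class AddReturnAnnotation(cst.CSTTransformer):
    """Attach a return annotation to functions that lack one.

    `return_types` maps function names to type strings; in the real
    pipeline these would come from the LLM, here they are hard-coded.
    """

    def __init__(self, return_types: dict[str, str]) -> None:
        self.return_types = return_types

    def leave_FunctionDef(
        self, original_node: cst.FunctionDef, updated_node: cst.FunctionDef
    ) -> cst.FunctionDef:
        name = original_node.name.value
        if updated_node.returns is None and name in self.return_types:
            annotation = cst.Annotation(cst.parse_expression(self.return_types[name]))
            return updated_node.with_changes(returns=annotation)
        return updated_node


source = "def add(a, b):\n    return a + b\n"
module = cst.parse_module(source)
print(module.visit(AddReturnAnnotation({"add": "int"})).code)
# def add(a, b) -> int:
#     return a + b
```

Because a CST preserves whitespace and comments exactly, the rewritten code round-trips byte-for-byte apart from the inserted annotations, which makes it a safer target for automated edits than a plain AST.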
Varun Bharti
IIIT Delhi, Delhi, India
Shashwat Jha
IIIT Delhi, Delhi, India
Dhruv Kumar
BITS Pilani, Pilani, India
Pankaj Jalote
IIIT Delhi, Delhi, India
Software Engineering