Can LLMs Replace Humans During Code Chunking?

📅 2025-06-24
🤖 AI Summary
Modernizing government legacy systems, particularly those written in low-resource, obsolete languages such as MUMPS and assembly language code (ALC), is hampered by ultra-long code contexts that exceed standard LLM context windows and by the high cost and inconsistency of manual code chunking. Method: an autonomous, multi-model approach leveraging GPT-4o, Claude 3 Sonnet, Mixtral, and Llama 3 to perform semantic code chunking and generate module-level annotations for such legacy code. Contribution/Results: experiments show that LLMs identify semantic segmentation points closely aligned with domain experts' judgments, and module comments generated from LLM-chosen partitions are up to 20% more factual and up to 10% more useful than those generated from human-chosen partitions. This work provides empirical evidence that LLMs can understand legacy code and produce structured documentation, establishing an efficient, scalable technical pathway for large-scale legacy system modernization.

📝 Abstract
Large language models (LLMs) have become essential tools in computer science, especially for tasks involving code understanding and generation. However, existing work does not address many of the unique challenges presented by code written for government applications. In particular, government enterprise software is often written in legacy languages like MUMPS or assembly language code (ALC), and the overall token lengths of these systems exceed the context window size for current commercially available LLMs. Additionally, LLMs are primarily trained on modern software languages and have undergone limited testing with legacy languages, making their ability to understand legacy languages unknown and, hence, an area for empirical study. This paper examines the application of LLMs in the modernization of legacy government code written in ALC and MUMPS, addressing the challenges of input limitations. We investigate various code-chunking methods to optimize the generation of summary module comments for legacy code files, evaluating the impact of code-chunking methods on the quality of documentation produced by different LLMs, including GPT-4o, Claude 3 Sonnet, Mixtral, and Llama 3. Our results indicate that LLMs can select partition points closely aligned with human expert partitioning. We also find that chunking approaches have a significant impact on downstream tasks such as documentation generation. LLM-created partitions produce comments that are up to 20% more factual and up to 10% more useful than when humans create partitions. Therefore, we conclude that LLMs can be used as suitable replacements for human partitioning of large codebases during LLM-aided modernization.
Problem

Research questions and friction points this paper is trying to address.

Can LLMs handle legacy government code chunking?
Do LLMs understand legacy languages like MUMPS and ALC?
How does code-chunking affect documentation quality?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizing code-chunking for legacy languages
Evaluating LLMs on MUMPS and ALC
LLMs improve documentation quality significantly
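The core idea above, splitting an oversized legacy file into context-window-sized chunks at semantically meaningful boundaries, can be sketched in a few lines. This is an illustration only, not the paper's actual pipeline: the LLM's choice of partition points is stubbed with a simple heuristic (blank lines and unindented labels, as in MUMPS-style routines), and token counts are approximated.

```python
# Minimal sketch of token-budgeted semantic code chunking.
# Assumption: in the paper, an LLM proposes partition points; here a
# cheap heuristic stands in for that call so the sketch is runnable.

def approx_tokens(line: str) -> int:
    # Rough token estimate: ~1 token per 4 characters.
    return max(1, len(line) // 4)

def candidate_boundaries(lines: list[str]) -> set[int]:
    # Heuristic stand-in for LLM-chosen partition points: allow splits
    # at blank lines and at unindented lines (labels/entry points in
    # MUMPS-style code start in column 1).
    return {i for i, ln in enumerate(lines)
            if ln.strip() == "" or (ln and not ln[0].isspace())}

def chunk(lines: list[str], budget: int = 50) -> list[list[str]]:
    # Greedily pack lines into chunks, splitting only at candidate
    # boundaries so each chunk stays coherent and (where boundaries
    # permit) under the model's context budget.
    bounds = candidate_boundaries(lines)
    chunks: list[list[str]] = []
    current: list[str] = []
    used = 0
    for i, ln in enumerate(lines):
        cost = approx_tokens(ln)
        if current and used + cost > budget and i in bounds:
            chunks.append(current)
            current, used = [], 0
        current.append(ln)
        used += cost
    if current:
        chunks.append(current)
    return chunks
```

Each resulting chunk would then be summarized independently, and the per-chunk summaries combined into a module-level comment. Replacing `candidate_boundaries` with an LLM query over the code is the step the paper evaluates against human partitioning.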
Christopher Glasz, The MITRE Corporation, McLean, VA
Emily Escamilla, The MITRE Corporation, McLean, VA
Eric O. Scott, The MITRE Corporation, McLean, VA
Anand Patel, The MITRE Corporation, McLean, VA
Jacob Zimmer, Lead AI Engineer, The MITRE Corporation
Colin Diggs, The MITRE Corporation, McLean, VA
Michael Doyle, Lead AI Research Engineer, MITRE
Scott Rosen, The MITRE Corporation
Nitin Naik, The MITRE Corporation, McLean, VA
Justin F. Brunelle, Chief Scientist, Software Engineering Innovation Center, The MITRE Corporation
Samruddhi Thaker, The MITRE Corporation, McLean, VA
Parthav Poudel, The MITRE Corporation, McLean, VA
Arun Sridharan, The MITRE Corporation, McLean, VA
Amit Madan, The MITRE Corporation, McLean, VA
Doug Wendt, The MITRE Corporation, McLean, VA
William Macke
Thomas Schill, The MITRE Corporation, McLean, VA