Self-Admitted Technical Debt in LLM Software: An Empirical Comparison with ML and Non-ML Software

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of systematic understanding of self-admitted technical debt (SATD) in large language model (LLM) software, particularly in comparison with traditional machine learning (ML) and non-ML systems. Through an empirical analysis of 477 open-source repositories—combining large-scale code mining, survival analysis, and qualitative coding—the work identifies three LLM-specific SATD categories: model-stack workaround debt, model dependency debt, and performance optimization debt. Findings reveal that LLM projects exhibit a SATD incidence rate of 3.95%, comparable to ML projects (4.10%), yet their median debt-free period is 492 days—2.4 times longer than that of ML projects. Moreover, these LLM-specific debts are significantly concentrated in particular development phases, uncovering distinct evolutionary patterns of technical debt in LLM development.
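As context for the "large-scale code mining" mentioned above: SATD studies typically detect debt by pattern-matching developer comments against keywords such as TODO, FIXME, and HACK. The sketch below is illustrative only; the keyword list and detection pipeline are assumptions, not the paper's actual method.

```python
import re

# Illustrative keyword patterns commonly used in SATD mining studies
# (the paper's exact detection rules are not reproduced here).
SATD_PATTERN = re.compile(
    r"\b(TODO|FIXME|HACK|XXX|workaround|temporary fix)\b", re.IGNORECASE
)

def extract_comments(source: str) -> list[str]:
    """Collect single-line '#' comments from Python-style source text."""
    return [line.split("#", 1)[1].strip()
            for line in source.splitlines() if "#" in line]

def satd_incidence(comments: list[str]) -> float:
    """Fraction of comments that self-admit technical debt."""
    if not comments:
        return 0.0
    flagged = sum(1 for c in comments if SATD_PATTERN.search(c))
    return flagged / len(comments)

# Toy input: 3 comments, 2 of which self-admit debt.
code = """
x = 1  # TODO: replace this hack with the real tokenizer
y = 2  # computes the offset
z = 3  # FIXME: temporary fix until the API stabilizes
"""
rate = satd_incidence(extract_comments(code))
print(f"SATD incidence: {rate:.2%}")
```

An incidence rate like the paper's 3.95% is this same ratio computed over all comments in a repository corpus.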

📝 Abstract
Self-admitted technical debt (SATD), referring to comments flagged by developers that explicitly acknowledge suboptimal code or incomplete functionality, has received extensive attention in machine learning (ML) and traditional (non-ML) software. However, little is known about how SATD manifests and evolves in contemporary Large Language Model (LLM)-based systems, whose architectures, workflows, and dependencies differ fundamentally from both traditional and pre-LLM ML software. In this paper, we conduct the first empirical study of SATD in the LLM era, replicating and extending prior work on ML technical debt to modern LLM-based systems. We compare SATD prevalence across 477 LLM, ML, and non-ML repositories (159 per category). We perform survival analysis of SATD introduction and removal to understand the dynamics of technical debt across different development paradigms. Surprisingly, despite their architectural complexity, our results reveal that LLM repositories accumulate SATD at similar rates to ML systems (3.95% vs. 4.10%). However, we observe that LLM repositories remain debt-free 2.4x longer than ML repositories (a median of 492 days vs. 204 days) and then begin to accumulate technical debt rapidly. Moreover, our qualitative analysis of 377 SATD instances reveals three new forms of technical debt unique to LLM-based development that have not been reported in prior research: Model-Stack Workaround Debt, Model Dependency Debt, and Performance Optimization Debt. Finally, by mapping SATD to stages of the LLM development pipeline, we observe that debt concentrates in particular development stages.
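The "median debt-free period" reported in the abstract is the kind of quantity a survival analysis yields: the time from repository creation until the first SATD comment, with still-debt-free repositories treated as right-censored. The sketch below implements a minimal Kaplan-Meier median on invented data; it is an illustration of the technique, not the paper's code or data.

```python
def kaplan_meier_median(times, observed):
    """Kaplan-Meier estimate of the median time-to-event.

    times:    days until first SATD comment (or last observation if censored)
    observed: True if SATD was introduced, False if right-censored
    """
    data = sorted(zip(times, observed))
    n_at_risk = len(data)
    survival = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        # Events and total subjects leaving the risk set at time t.
        events = sum(1 for tt, d in data if tt == t and d)
        removed = sum(1 for tt, _ in data if tt == t)
        if events:
            survival *= 1.0 - events / n_at_risk
            if survival <= 0.5:
                return t  # first time the survival curve reaches 50%
        n_at_risk -= removed
        i += removed
    return None  # median not reached within the observation window

# Invented example: 7 repositories, two still debt-free (censored).
times    = [100, 180, 204, 300, 480, 500, 600]
observed = [True, True, True, True, True, False, False]
median_days = kaplan_meier_median(times, observed)
print(median_days)  # median debt-free period for this toy cohort
```

Applied to real repository histories, this is how one arrives at medians like the 492 vs. 204 days contrasted above.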
Problem

Research questions and friction points this paper is trying to address.

Self-Admitted Technical Debt
Large Language Model
Technical Debt
Empirical Study
Software Evolution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Admitted Technical Debt
Large Language Models
Empirical Study
Technical Debt Taxonomy
Model Dependency Debt
Niruthiha Selvanayagam
École de technologie supérieure - ÉTS Montréal, Montréal, Canada
T. Ghaleb
Trent University, Peterborough, Canada
Manel Abdellatif
Professor - École de Technologie Supérieure, Montreal, Canada
Software Evolution · Service Computing · Machine Learning · Trustworthy AI