From Code Changes to Quality Gains: An Empirical Study in Python ML Systems with PyQu

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
A long-standing gap exists in Python machine learning systems (MLS) between code changes and measurable software quality improvements, hindering quality-driven development practices. Method: This paper introduces the first large-scale empirical framework for MLS, proposing a context-aware taxonomy of 13 code-change categories and developing PyQu, a tool that integrates low-level software metrics, ML-specific indicators, and topic modeling to automatically identify quality-enhancing commits. Contribution/Results: Evaluated on 3,340 open-source projects and 3.7 million commits, PyQu achieves an average F1-score of 0.85. It discovers 41% more quality-improving changes than prior approaches and identifies 61 representative modifications with demonstrated quality gains. The study fills a critical empirical gap in linking code changes to quality outcomes in MLS, delivering an interpretable, reusable methodological foundation for automated quality assessment.

📝 Abstract
In an era shaped by Generative Artificial Intelligence for code generation and the rising adoption of Python-based Machine Learning systems (MLS), software quality has emerged as a major concern. As these systems grow in complexity and importance, a key obstacle lies in understanding exactly how specific code changes affect overall quality, a shortfall aggravated by the lack of quality assessment tools and of a clear mapping between ML system code changes and their quality effects. Although prior work has explored code changes in MLS, it mostly stops at describing what the changes are, leaving a gap in our knowledge of the relationship between code changes and MLS quality. To address this gap, we conducted a large-scale empirical study of 3,340 open-source Python ML projects, encompassing more than 3.7 million commits and 2.7 trillion lines of code. We introduce PyQu, a novel tool that leverages low-level software metrics to identify quality-enhancing commits, achieving an average accuracy, precision, and recall of 0.84 and an average F1 score of 0.85. Using PyQu and a thematic analysis, we identified 61 code changes, each demonstrating a direct impact on software quality, and classified them into 13 categories based on contextual characteristics. 41% of the changes are newly discovered by our study and had not been identified by state-of-the-art Python change-detection tools. Our work offers a vital foundation for researchers, practitioners, educators, and tool developers, advancing the quest for automated quality assessment and best practices in Python-based ML software.
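The paper does not reproduce PyQu's implementation here. As a rough, hypothetical sketch of the general idea of flagging a commit as quality-enhancing when a low-level metric improves, one might compare a crude complexity proxy for the code before and after a change (the `load` function and its before/after versions below are invented for illustration; PyQu's actual metrics and classifier are more involved):

```python
import ast

def complexity_score(source: str) -> int:
    """Crude cyclomatic-complexity proxy: count branching nodes in the AST."""
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try,
                    ast.With, ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

# Hypothetical before/after versions of a function touched by a commit.
BEFORE = """
def load(path):
    if path.endswith(".csv"):
        if has_header(path):
            return read_csv(path, header=0)
        else:
            return read_csv(path, header=None)
    else:
        raise ValueError("unsupported")
"""

AFTER = """
def load(path):
    if not path.endswith(".csv"):
        raise ValueError("unsupported")
    header = 0 if has_header(path) else None
    return read_csv(path, header=header)
"""

before, after = complexity_score(BEFORE), complexity_score(AFTER)
# A drop in the score marks the commit as potentially quality-enhancing.
print(before, after, after < before)  # → 3 2 True
```

A single metric like this is, of course, noisy; the abstract indicates PyQu combines several low-level metrics to reach its reported accuracy.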
Problem

Research questions and friction points this paper is trying to address.

Mapping code changes to quality effects in Python ML systems
Addressing the lack of quality assessment tools for ML systems
Identifying quality-enhancing code changes through empirical analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

PyQu tool uses software metrics for quality assessment
Identifies code changes directly impacting software quality
Classifies changes into categories with contextual characteristics