Revisiting Method-Level Change Prediction: A Comparative Evaluation at Different Granularities

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the granularity trade-off between method-level and class-level change prediction. Using historical change data from 15 open-source projects, we construct a multi-faceted evaluation framework assessing (i) direct predictive performance, (ii) method-granularity alignment, and (iii) maintenance cost awareness. Our systematic empirical analysis reveals that method-level prediction achieves a median accuracy improvement of 0.26 over class-level prediction under method-granularity alignment—particularly pronounced under low maintenance effort constraints. Crucially, the study clarifies the practical applicability boundary of granularity selection: method-level prediction is not universally superior, but demonstrates statistically significant advantages specifically in dimensions aligned with real-world maintenance practices—most notably when prediction targets match actual maintenance units (i.e., methods). These findings provide empirically grounded guidance for granularity selection in change prediction models, bridging the gap between theoretical model design and operational maintenance requirements.

📝 Abstract
To improve the efficiency of software maintenance, change prediction techniques have been proposed to predict frequently changing modules. Whereas existing techniques focus primarily on class-level prediction, method-level prediction allows for more direct identification of change locations. Method-level prediction can be useful, but it may also negatively affect prediction performance, leading to a trade-off. This makes it unclear which level of granularity users should select for their predictions. In this paper, we evaluated the performance of method-level change prediction compared with that of class-level prediction from three perspectives: direct comparison, method-level comparison, and maintenance effort-aware comparison. The results from 15 open source projects show that, although method-level prediction exhibited lower performance than class-level prediction in the direct comparison, method-level prediction outperformed class-level prediction when both were evaluated at the method level, with a median difference of 0.26 in accuracy. Furthermore, the effort-aware comparison showed that method-level prediction performed significantly better when the acceptable maintenance effort is limited.
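The method-level comparison described above requires scoring both predictors on the same units. One common way to do this (a minimal illustrative sketch, not the paper's actual procedure; all identifiers and data below are hypothetical) is to propagate each class-level prediction down to every method in that class and then compute accuracy over methods:

```python
# Hypothetical ground truth: did each method actually change?
actual = {
    "A.foo": True,  "A.bar": False,
    "B.baz": True,  "B.qux": True,
}
# Each method's enclosing class
enclosing = {"A.foo": "A", "A.bar": "A", "B.baz": "B", "B.qux": "B"}

# Hypothetical predictions at the two granularities
method_pred = {"A.foo": True, "A.bar": False, "B.baz": True, "B.qux": True}
class_pred = {"A": True, "B": True}  # class-level: "will any method in the class change?"

def accuracy(pred):
    # Fraction of methods whose predicted label matches the ground truth
    hits = sum(pred[m] == actual[m] for m in actual)
    return hits / len(actual)

# Propagate class-level labels down to methods for a like-for-like comparison
class_as_method = {m: class_pred[enclosing[m]] for m in actual}

print(accuracy(method_pred))      # → 1.0
print(accuracy(class_as_method))  # → 0.75
```

In this toy example the class-level predictor is penalized on `A.bar`: predicting that class `A` will change implies a change to every method in `A`, including methods that in fact stayed stable, which is exactly the granularity mismatch the paper's method-level comparison is designed to expose.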
Problem

Research questions and friction points this paper is trying to address.

Evaluating method-level change prediction efficiency
Comparing granularities for software maintenance predictions
Assessing trade-offs in method vs class-level prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Method-level change prediction
Comparative performance evaluation
Maintenance effort-aware analysis
Hiroto Sugimori
School of Computing, Institute of Science Tokyo, Meguro-ku, Tokyo 152–8550, Japan
Shinpei Hayashi
Institute of Science Tokyo
Software Engineering · Refactoring · Software Evolution · Software Maintenance