LLM4SZZ: Enhancing SZZ Algorithm with Context-Enhanced Assessment on Large Language Models

📅 2025-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
SZZ algorithms are the mainstream approach for identifying bug-inducing commits, yet existing variants largely rely on static heuristics or single-language deep models, neglect semantic cues in commit messages and patch context, and remain constrained by the restrictive "deleted-line" assumption, which limits their performance gains. This paper proposes LLM4SZZ, a large language model (LLM)-based framework that combines two approaches: rank-based identification and context-enhanced identification. The framework selects a strategy based on whether the LLM can comprehend the bug in a given fixing commit: context-enhanced identification supplies the LLM with richer context (including the commit message and diff context) and asks it to pick the bug-inducing commit from a set of candidates, while rank-based identification asks the LLM to select buggy statements from the bug-fixing commit and rank them by relevance to the root cause. Evaluated on three standard benchmarks, the method outperforms all baselines, improving F1-score by 6.9% to 16.0% without significantly sacrificing recall.

📝 Abstract
The SZZ algorithm is the dominant technique for identifying bug-inducing commits and serves as a foundation for many software engineering studies, such as bug prediction and static code analysis. Researchers have proposed many variants to enhance the SZZ algorithm's performance since its introduction. The majority of them rely on static techniques or heuristic assumptions, making them easy to implement, but their performance improvements are often limited. Recently, a deep learning-based SZZ algorithm has been introduced to enhance the original SZZ algorithm. However, it requires complex preprocessing and is restricted to a single programming language. Additionally, while it enhances precision, it sacrifices recall. Furthermore, most variants overlook crucial information, such as commit messages and patch context, and are limited to bug-fixing commits involving deleted lines. The emergence of large language models (LLMs) offers an opportunity to address these drawbacks. In this study, we investigate the strengths and limitations of LLMs and propose LLM4SZZ, which employs two approaches (i.e., rank-based identification and context-enhanced identification) to handle different types of bug-fixing commits. We determine which approach to adopt based on the LLM's ability to comprehend the bug and identify whether the bug is present in a commit. The context-enhanced identification provides the LLM with more context and requires it to find the bug-inducing commit among a set of candidate commits. In rank-based identification, we ask the LLM to select buggy statements from the bug-fixing commit and rank them based on their relevance to the root cause. Experimental results show that LLM4SZZ outperforms all baselines across three datasets, improving F1-score by 6.9% to 16.0% without significantly sacrificing recall.
Problem

Research questions and friction points this paper is trying to address.

Enhancing SZZ algorithm for bug-inducing commit identification
Addressing limitations of static techniques and heuristic assumptions
Leveraging LLMs to improve precision without sacrificing recall
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large language models for SZZ enhancement
Implements rank-based and context-enhanced identification
Improves F1-score significantly without sacrificing recall
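The dual-path dispatch described above can be sketched in a few lines. All callables here (`grasps_bug`, `pick_inducing`, `rank_statements`, `trace`) are hypothetical stand-ins for the paper's LLM prompt calls, not its actual API; the sketch only shows the control flow implied by the abstract.

```python
# Illustrative dispatch between LLM4SZZ's two identification paths:
# if the LLM demonstrably comprehends the bug, it picks the bug-inducing
# commit from the SZZ candidate set given richer context; otherwise it
# ranks buggy statements in the fix, which are then traced back to commits.
def identify_inducing(fix_commit, candidates,
                      grasps_bug, pick_inducing, rank_statements, trace):
    if grasps_bug(fix_commit):
        # Context-enhanced identification: choose among candidate commits.
        return pick_inducing(fix_commit, candidates)
    # Rank-based identification: rank buggy statements by relevance to
    # the root cause, then map the top-ranked statements to commits.
    ranked = rank_statements(fix_commit)
    return trace(ranked, candidates)
```

In use, each callable would wrap a prompt to the LLM (or, for `trace`, a blame lookup); injecting them as parameters keeps the two-path logic testable without a model.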