Towards Human-interpretable Explanation in Code Clone Detection using LLM-based Post Hoc Explainer

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
While state-of-the-art machine learning-based code clone detectors (e.g., GraphCodeBERT) achieve high accuracy in semantic clone detection, they lack interpretability; existing post-hoc explanation methods either require white-box model access or incur prohibitive computational overhead. Method: We propose a zero-shot, model-agnostic, post-hoc explanation framework leveraging large language models (specifically ChatGPT-4). It generates natural-language explanations solely from the detector's binary output and the original code pair, exploiting in-context learning without accessing internal model parameters, which keeps computational cost low. Setting the sampling temperature to zero further improves explanation accuracy and consistency. Contribution/Results: Empirical evaluation shows that 98% of the generated explanations are logically correct and 95% meet high-quality standards, while avoiding the white-box access and computational expense of conventional post-hoc methods. The approach improves developer understanding of, and trust in, clone detection decisions without modifying the detector.
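The paper does not publish its exact prompt or client code; the following is a minimal sketch of the query pattern described above, assuming the current OpenAI Python SDK and a hypothetical prompt wording. The explainer sees only the code pair and the detector's binary verdict, never the detector's internals.

```python
# Hypothetical sketch of the model-agnostic explainer: the only inputs are the
# code pair and the detector's binary verdict (no gradients, no attention maps).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def explain_clone_verdict(code_a: str, code_b: str, is_clone: bool) -> str:
    """Ask an LLM to justify a black-box clone detector's decision."""
    verdict = "ARE clones" if is_clone else "are NOT clones"
    prompt = (
        "A code clone detector decided that the following two fragments "
        f"{verdict}. Explain in plain language which statements or structures "
        "support this decision, citing specific lines.\n\n"
        f"Fragment A:\n{code_a}\n\nFragment B:\n{code_b}"
    )
    response = client.chat.completions.create(
        model="gpt-4",   # the paper uses ChatGPT-4
        temperature=0,   # zero temperature, per the paper's accuracy finding
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Because decoding at temperature 0 is effectively greedy, repeated calls on the same pair should return near-identical explanations, which is the consistency property the summary refers to.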

📝 Abstract
Recent studies highlight various machine learning (ML)-based techniques for code clone detection, which can be integrated into developer tools such as static code analysis. With the advancements brought by ML in code understanding, ML-based code clone detectors can accurately identify and classify cloned pairs, especially semantic clones, but often operate as black boxes, providing little insight into the decision-making process. Post hoc explainers, on the other hand, aim to interpret and explain the predictions of these ML models after they are made, offering a way to understand the underlying mechanisms driving the models' decisions. However, current post hoc techniques require white-box access to the ML model or are computationally expensive, indicating a need for advanced post hoc explainers. In this paper, we propose a novel approach that leverages the in-context learning capabilities of large language models to elucidate the predictions made by ML-based code clone detectors. We perform a study using ChatGPT-4 to explain the code clone results inferred by GraphCodeBERT. We find the approach promising as a post hoc explainer: it gives correct explanations up to 98% of the time and good explanations 95% of the time. However, the explanations and the code line examples given by the LLM are useful only in some cases. We also find that lowering the temperature to zero helps increase the accuracy of the explanations. Lastly, we list insights that can lead to further improvements in future work. This study paves the way for future research on using LLMs as post hoc explainers for various software engineering tasks.
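For context, here is a minimal sketch of the detector side whose verdicts are being explained. It assumes a Hugging Face checkpoint already fine-tuned for binary clone classification (the base microsoft/graphcodebert-base checkpoint is not), and it ignores GraphCodeBERT's data-flow inputs, treating the model as an ordinary paired-sequence classifier.

```python
# Sketch of black-box clone detection with a fine-tuned GraphCodeBERT.
# MODEL is a placeholder: swap in a checkpoint fine-tuned for clone detection.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "microsoft/graphcodebert-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def predict_clone(code_a: str, code_b: str) -> bool:
    """Return the detector's binary verdict for a code pair."""
    inputs = tokenizer(code_a, code_b, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return bool(logits.argmax(dim=-1).item())  # 1 = clone, 0 = not a clone
```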
Problem

Research questions and friction points this paper is trying to address.

Explaining the predictions of black-box ML-based code clone detectors
Overcoming the computational expense and white-box access requirements of existing post hoc explainers
Providing human-understandable explanations for semantic clone identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large language models for post hoc explanation
Leverages ChatGPT-4 to explain GraphCodeBERT predictions
Improves explanation accuracy by setting the sampling temperature to zero (see the sketch after this list)
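A hypothetical way to probe the temperature effect, reusing explain_clone_verdict from the sketch above: re-query the explainer several times and count how many distinct answers come back. At temperature 0, decoding is effectively deterministic, so repeated runs should collapse to a single explanation.

```python
def explanation_consistency(code_a: str, code_b: str, is_clone: bool,
                            runs: int = 3) -> float:
    """Consistency score: 1.0 means every run produced the same text verbatim."""
    answers = {explain_clone_verdict(code_a, code_b, is_clone)
               for _ in range(runs)}
    return 1.0 / len(answers)
```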
Teeradaj Racharak
Tohoku University
Description Logic, Argumentation, Machine Learning, Neural-Symbolic, Artificial Intelligence
Chaiyong Ragkhitwetsagul
Assistant Professor, Faculty of ICT, Mahidol University
Software Engineering, Mining Software Repositories, Code Similarity, Empirical Studies
Chayanee Junplong
Faculty of Information and Communication Technology, Mahidol University, Thailand
Akara Supratak
Faculty of Information and Communication Technology, Mahidol University, Thailand