🤖 AI Summary
This paper addresses the challenges of detecting diverged code clones and of scaling clone detection to large codebases. It proposes a model-selection criterion for large language models (LLMs) based on intrinsic characteristics, namely embedding dimensionality and tokenizer vocabulary size, and designs an ensemble strategy combining normalized scoring with weighted fusion. Experiments are conducted with CodeT5+110M, CuBERT, and SPTCode on BigCloneBench and a real-world industrial-scale dataset. The best single model achieves 39.71% precision, twice that of CodeBERT, while the ensemble reaches 46.91%. The key contributions are: (1) the first systematic framework for selecting LLMs for code clone detection; (2) empirical evidence that normalized scoring combined with max-based fusion significantly outperforms average-based fusion; and (3) a demonstration of the efficacy and scalability of LLM ensembles in authentic industrial settings.
📝 Abstract
Source code clones pose risks ranging from intellectual property violations to unintended vulnerabilities. Effective, efficient, and scalable clone detection, especially for diverged clones, remains challenging. Large language models (LLMs) have recently been applied to clone detection tasks. However, the rapid emergence of LLMs raises questions about optimal model selection and the potential efficacy of LLM ensembles.
This paper addresses the first question by identifying 76 LLMs and filtering them down to suitable candidates for large-scale clone detection. The candidates were evaluated on two datasets: the public BigCloneBench benchmark and a commercial large-scale dataset. No uniformly best LLM emerged, although CodeT5+110M, CuBERT, and SPTCode were the top performers. Analysis of the candidate LLMs suggested that smaller embedding sizes, smaller tokenizer vocabularies, and tailored datasets are advantageous. On the commercial large-scale dataset, the top-performing CodeT5+110M achieved 39.71% precision, twice that of the previously used CodeBERT.
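As a rough illustration of how such intrinsic characteristics could be used to pre-filter candidates, the sketch below keeps only models whose embedding dimensionality and tokenizer vocabulary fall under fixed caps; the model entries and thresholds are hypothetical, not the values or cut-offs used in the paper.

```python
# Hypothetical candidate metadata; names, dimensions, and vocabulary
# sizes are illustrative only, not taken from the paper.
candidates = [
    {"name": "small_code_model",   "embedding_dim": 768,  "vocab_size": 32_000},
    {"name": "medium_code_model",  "embedding_dim": 1024, "vocab_size": 50_000},
    {"name": "large_generic_model", "embedding_dim": 4096, "vocab_size": 250_000},
]

# Illustrative caps: smaller embeddings and smaller, code-tailored
# vocabularies were found advantageous for large-scale clone detection.
MAX_EMBEDDING_DIM = 1024
MAX_VOCAB_SIZE = 64_000

selected = [
    c["name"]
    for c in candidates
    if c["embedding_dim"] <= MAX_EMBEDDING_DIM and c["vocab_size"] <= MAX_VOCAB_SIZE
]
print(selected)  # -> ['small_code_model', 'medium_code_model']
```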
To address the second question, this paper explores ensembling the selected LLMs, an effort-effective approach to improving effectiveness. The results suggest the importance of score normalization and of favoring fusion methods such as maximum or sum over averaging. The findings also indicate that ensembling can yield statistically significant gains on larger datasets: the best-performing ensemble achieved an even higher precision of 46.91%, surpassing every individual LLM on the commercial large-scale code.
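The normalization and fusion step can be sketched as follows, assuming each selected LLM produces one raw similarity score per clone-candidate pair (for example, cosine similarity of its code embeddings); the model names, weights, scores, and decision threshold are illustrative only, not the paper's actual configuration.

```python
import numpy as np

def min_max_normalize(scores):
    """Scale one model's raw similarity scores to [0, 1] so that models
    with different score ranges become comparable before fusion."""
    lo, hi = scores.min(), scores.max()
    if hi == lo:                       # degenerate case: constant scores
        return np.zeros_like(scores)
    return (scores - lo) / (hi - lo)

def ensemble_scores(per_model_scores, weights=None, fusion="max"):
    """Fuse normalized, weighted scores from several models.
    `per_model_scores` maps a model name to a 1-D array of raw
    similarities, one value per clone-candidate pair."""
    names = list(per_model_scores)
    if weights is None:
        weights = {name: 1.0 for name in names}
    # Normalize each model's scores independently, then apply its weight.
    stacked = np.stack([
        weights[name] * min_max_normalize(np.asarray(per_model_scores[name], dtype=float))
        for name in names
    ])
    if fusion == "max":
        return stacked.max(axis=0)
    if fusion == "sum":
        return stacked.sum(axis=0)
    return stacked.mean(axis=0)        # plain averaging, for comparison

# Hypothetical raw similarities for three candidate pairs.
raw = {
    "model_a": [0.91, 0.40, 0.75],
    "model_b": [0.55, 0.20, 0.60],
    "model_c": [0.80, 0.35, 0.70],
}
fused = ensemble_scores(raw, fusion="max")
predicted_clones = fused >= 0.5        # illustrative decision threshold
```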