Selecting and Combining Large Language Models for Scalable Code Clone Detection

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenges of detecting dispersed code clones and poor scalability in large-scale code clone detection. We propose a model-selection criterion for large language models (LLMs) based on intrinsic characteristics—namely, embedding dimensionality and vocabulary size—and design an ensemble strategy combining normalized scoring with weighted fusion. Experiments are conducted using CodeT5+110M, CuBERT, and SPTCode on BigCloneBench and a real-world industrial-scale dataset. The best single model achieves 39.71% precision—doubling CodeBERT’s performance—while the ensemble reaches 46.91%. Our key contributions are: (1) the first systematic framework for LLM selection tailored to code clone detection; (2) empirical validation that normalized scoring combined with max-based fusion significantly outperforms average-based fusion; and (3) demonstration of the efficacy and scalability of LLM ensembles in authentic industrial settings.
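The ensemble strategy described above (normalized scoring followed by max-based fusion) can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the model names and similarity scores below are made up, and min-max normalization is assumed as the normalization scheme.

```python
def min_max_normalize(scores):
    """Scale a model's raw similarity scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def max_fusion(per_model_scores):
    """Normalize each model's scores, then fuse them by taking the
    element-wise maximum over models for each candidate clone pair."""
    normalized = [min_max_normalize(scores) for scores in per_model_scores]
    return [max(column) for column in zip(*normalized)]

# Hypothetical raw similarity scores for three candidate clone pairs,
# one list per model (CodeT5+110M, CuBERT, SPTCode):
codet5_scores = [0.82, 0.40, 0.65]
cubert_scores = [0.30, 0.90, 0.55]
sptcode_scores = [0.50, 0.20, 0.95]

fused = max_fusion([codet5_scores, cubert_scores, sptcode_scores])
```

Normalization matters here because each model produces scores on its own scale; without it, the model with the largest raw scores would dominate the max-based fusion.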

📝 Abstract
Source code clones pose risks ranging from intellectual property violations to unintended vulnerabilities. Effective, efficient, and scalable clone detection, especially for diverged clones, remains challenging. Large language models (LLMs) have recently been applied to clone detection, but their rapid proliferation raises two questions: how to select the optimal model, and whether LLM ensembles can be effective. This paper addresses the first question by identifying 76 LLMs and filtering them down to candidates suitable for large-scale clone detection. The candidates were evaluated on two public industrial datasets, BigCloneBench among them, and on a commercial large-scale dataset. No uniformly best LLM emerged, though CodeT5+110M, CuBERT, and SPTCode were top performers. Analysis of the candidates suggested that smaller embedding sizes, smaller tokenizer vocabularies, and tailored training datasets are advantageous. On the commercial large-scale dataset, the top-performing CodeT5+110M achieved 39.71% precision: twice the precision of the previously used CodeBERT. To address the second question, the paper explores ensembling of the selected LLMs as an effort-effective way to improve effectiveness. The results indicate the importance of score normalization and favor fusion methods such as maximum or sum over averaging. The findings also show that ensembling can yield statistically significant improvements on larger datasets: the best-performing ensemble achieved an even higher precision of 46.91% on the commercial large-scale code.
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM selection for scalable code clone detection
Evaluating ensemble methods to enhance clone detection accuracy
Addressing diverged-clone detection challenges with LLM combinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identified 76 LLMs and filtered them to candidates suitable for scalable code clone detection
Evaluated models on industrial datasets to identify top performers
Ensembled top LLMs with normalization for improved precision
Muslim Chochlov
University of Limerick, Department of Computer Science and Information Systems, Limerick, Ireland
Gul Aftab Ahmed
Trinity College Dublin, Department of Computer Science, Dublin, Ireland
James Vincent Patten
University of Limerick, Department of Computer Science and Information Systems, Limerick, Ireland
Yuanhua Han
Huawei Technologies Co., Ltd. WN Digital IPD and Trustworthiness Enabling, Xi’an, Shaanxi, China
Guoxian Lu
Huawei Technologies Co., Ltd. WN Digital IPD and Trustworthiness Enabling, Shanghai, China
David Gregg
Professor in Computer Science, Lero, Trinity College Dublin
Compilers, computer architecture, low power computing, embedded machine learning
Jim Buckley
Lecturer, University of Limerick
System analysis, programmer information seeking, AI4SE, SE4AI