🤖 AI Summary
To address the low detection accuracy and poor interpretability of conventional approaches for identifying typical smart contract vulnerabilities—such as reentrancy and integer overflows—this paper proposes a novel hybrid detection framework integrating large language models (LLMs) with supervised machine learning. We conduct the first systematic comparative evaluation of fine-tuned LLMs (e.g., DistilBERT) versus traditional ML models on multi-class smart contract vulnerability classification, empirically demonstrating LLMs’ superior capability in capturing fine-grained code semantics. On a manually annotated dataset, our fine-tuned LLM achieves over 90% classification accuracy—substantially outperforming state-of-the-art methods and establishing new benchmarks on mainstream evaluation sets. This work significantly enhances both detection accuracy and model interpretability, while introducing an efficient, scalable technical paradigm for blockchain smart contract security analysis.
📝 Abstract
As blockchain technology and smart contracts become widely adopted, securing them throughout every stage of the transaction process is essential. This work addresses smart contract security by finding and detecting vulnerabilities with classical Machine Learning (ML) models and fine-tuned Large Language Models (LLMs). The robustness of this work rests on a labeled smart contract dataset with annotated vulnerabilities, on which several LLMs, such as DistilBERT, are trained and tested alongside various traditional machine learning algorithms. We train and test these models to classify smart contract code by vulnerability type and compare their performance. Fine-tuning the LLMs specifically for smart contract code classification improves the detection of several well-known vulnerability types, such as Reentrancy, Integer Overflow, Timestamp Dependency, and Dangerous Delegatecall. Our initial experimental results show that the fine-tuned LLM surpasses every other model, achieving over 90% accuracy and advancing existing vulnerability detection benchmarks. This performance provides strong evidence of LLMs' ability to capture subtle patterns in code that traditional ML models can miss. We compare each of the ML and LLM models to give a clear overview of their respective strengths, from which the most effective one can be chosen for real-world applications in smart contract security. Our research combines machine learning and large language models into a rich and interpretable framework for detecting different smart contract vulnerabilities, laying a foundation for a more secure blockchain ecosystem.
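The model comparison described above reduces to scoring each classifier by multi-class accuracy on a held-out, labeled set of contracts. A minimal sketch of that evaluation step is below; the label names come from the vulnerability types listed in the abstract, but the model names and toy predictions are illustrative placeholders, not the paper's actual data or results.

```python
# Hypothetical sketch of the evaluation protocol: each model is scored by
# multi-class accuracy on a held-out, labeled contract set. The label set
# mirrors the vulnerability types named in the abstract; the predictions
# below are toy data for illustration only.

LABELS = [
    "Reentrancy",
    "Integer Overflow",
    "Timestamp Dependency",
    "Dangerous Delegatecall",
]

def accuracy(y_true, y_pred):
    """Fraction of test contracts whose predicted class matches the label."""
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Toy held-out labels and per-model predictions (illustrative only).
y_true = ["Reentrancy", "Integer Overflow", "Timestamp Dependency",
          "Dangerous Delegatecall", "Reentrancy"]
preds = {
    "fine-tuned LLM (e.g., DistilBERT)": [
        "Reentrancy", "Integer Overflow", "Timestamp Dependency",
        "Dangerous Delegatecall", "Reentrancy"],
    "classical ML baseline": [
        "Reentrancy", "Reentrancy", "Timestamp Dependency",
        "Dangerous Delegatecall", "Integer Overflow"],
}

for name, y_pred in preds.items():
    print(f"{name}: accuracy = {accuracy(y_true, y_pred):.2f}")
```

In the paper, this per-model accuracy on the annotated dataset is the basis for the claim that the fine-tuned LLM exceeds 90% while the traditional ML baselines trail it.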