Fine-Tuning Code Language Models to Detect Cross-Language Bugs

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cross-language bugs (CLBs), errors arising from interactions between programming languages, elude detection by conventional single-language static analysis tools. Method: We propose the first systematic CLB detection framework: (1) constructing a benchmark dataset covering three language combinations and nine interaction patterns, supported by CLCFinder, a dedicated cross-language code identification tool; (2) conducting a comprehensive evaluation of 13 pretrained code language models, investigating the impact of fine-tuning, training data scale, sequence length, and code comments on detection performance. Results: Fine-tuning substantially improves performance, with UniXcoder-base achieving the highest F1-score of 0.7407; smaller fine-tuned models tend to outperform larger ones, supporting a "lightweight and efficient" paradigm; models fine-tuned only on single-language bug data perform poorly on CLBs, confirming that CLBs are fundamentally distinct from single-language defects; and detection accuracy consistently improves with more training data. This work establishes a new benchmark, introduces a novel tool, and advances foundational understanding of cross-language defect detection.

📝 Abstract
Multilingual programming, which involves using multiple programming languages (PLs) in a single project, is increasingly common due to its benefits. However, it introduces cross-language bugs (CLBs), which arise from interactions between different PLs and are difficult to detect with single-language bug detection tools. This paper investigates the potential of pre-trained code language models (CodeLMs) in CLB detection. We developed CLCFinder, a cross-language code identification tool, and constructed a CLB dataset involving three PL combinations (Python-C/C++, Java-C/C++, and Python-Java) with nine interaction types. We fine-tuned 13 CodeLMs on this dataset and evaluated their performance, analyzing the effects of dataset size, token sequence length, and code comments. Results show that all CodeLMs performed poorly before fine-tuning but improved to varying degrees after fine-tuning, with UniXcoder-base achieving the best F1 score (0.7407). Notably, small fine-tuned CodeLMs tended to perform better than large ones. CodeLMs fine-tuned on single-language bug datasets performed poorly on CLB detection, demonstrating the distinction between CLBs and single-language bugs. Additionally, increasing the fine-tuning dataset size significantly improved performance, while longer token sequences did not necessarily improve model performance. The impact of code comments varied across models: some fine-tuned CodeLMs improved, while others degraded.
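To make the notion of a cross-language bug concrete, the sketch below shows a classic Python-to-C interaction error via `ctypes` (an illustrative example chosen here, not one drawn from the paper's dataset). Each language's analyzer sees valid code in isolation; the bug lives in the foreign-function boundary, where `ctypes` assumes a C function returns `int` unless told otherwise. Assumes a Unix-like system where the C math library can be loaded.

```python
import ctypes
import ctypes.util

# Load the C math library (path resolution assumes a Unix-like system).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the argument type so the Python float crosses the boundary as a C double.
libm.sqrt.argtypes = [ctypes.c_double]

# Cross-language bug: ctypes defaults to an int return type, so the C double
# returned by sqrt() is misread; the result is a meaningless integer.
buggy = libm.sqrt(4.0)

# Fix: declare the foreign function's true return type on the Python side.
libm.sqrt.restype = ctypes.c_double
correct = libm.sqrt(4.0)  # 2.0
```

Neither a Python linter nor a C compiler flags the buggy call: the mismatch only exists in the declared interface between the two languages, which is exactly the kind of defect the paper targets.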
Problem

Research questions and friction points this paper is trying to address.

Detecting cross-language bugs in multilingual programming projects
Evaluating fine-tuned CodeLMs for CLB detection performance
Analyzing dataset size and token sequence impact on models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned CodeLMs for cross-language bug detection
Developed CLCFinder for multilingual code identification
Constructed CLB dataset with multiple PL combinations
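The paper does not detail CLCFinder's algorithm here, but a toy version of the underlying idea, locating the call sites where one language invokes another, can be sketched with pattern matching over common foreign-function interfaces. The pattern set and labels below are hypothetical simplifications, not the tool's actual rules.

```python
import re

# Hypothetical, heavily simplified FFI-usage patterns (NOT CLCFinder's actual
# rules): each regex flags a common way one language calls into another.
FFI_PATTERNS = {
    "python->c": re.compile(r"\bctypes\b|\bcffi\b"),
    "java->c": re.compile(r"\bnative\s+\w+\s+\w+\s*\("),   # JNI native method decl
    "python->java": re.compile(r"\bjpype\b|\bpy4j\b"),
}

def flag_cross_language_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, interaction_kind) for lines that touch an FFI boundary."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for kind, pattern in FFI_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, kind))
    return hits
```

A real identifier would need parsing rather than regexes (imports can be aliased, JNI involves generated headers), but the sketch conveys why cross-language code must first be isolated before CLB detection models can be trained on it.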