🤖 AI Summary
This study addresses the limited effectiveness of automated code review in industrial C# projects. We propose a monolingual (C#-specific) supervised fine-tuning approach to improve model performance on three core tasks: code change quality estimation, review comment generation, and code refinement. Using a curated C#-specific dataset, we fine-tune CodeReviewer, CodeLlama-7B, and DeepSeek-R1-Distill, and validate the results on both enterprise codebases and public benchmarks. Our key contribution is the empirical finding that alignment between the programming language and the natural language of the training data strongly governs model capability, underscoring the joint importance of linguistic consistency and task-specific adaptation. Experiments show that monolingual fine-tuning significantly improves output accuracy and relevance, allowing the models to match, and on routine tasks sometimes exceed, static analysis tools. However, performance gaps persist against human reviewers in semantically complex, context-intensive scenarios that require deep program understanding.
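To make the approach concrete, the monolingual setup can be pictured as standard supervised fine-tuning on C#-only pairs of code diffs and reviewer comments. The sketch below uses Hugging Face Transformers with CodeLlama-7B as an example; the prompt template, field names (`diff`, `comment`), and hyperparameters are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch: monolingual (C#-only) supervised fine-tuning of a causal LM
# for review comment generation. Field names, prompt format, and hyperparameters
# are illustrative assumptions, not the study's exact setup.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "codellama/CodeLlama-7b-hf"  # e.g., one of the three fine-tuned models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical C#-only training pairs: a code change and the reviewer's comment.
pairs = [
    {"diff": "- if (user == null) return;\n+ if (user is null) return;",
     "comment": "Prefer 'is null' for null checks; it ignores overloaded '==' operators."},
]

def to_features(example):
    # Concatenate diff and target comment into one causal-LM training sequence.
    text = (f"### C# diff:\n{example['diff']}\n"
            f"### Review comment:\n{example['comment']}")
    enc = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()
    return enc

train_ds = Dataset.from_list(pairs).map(to_features, remove_columns=["diff", "comment"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="csharp-review-sft",
                           per_device_train_batch_size=1,
                           num_train_epochs=3,
                           learning_rate=2e-5),
    train_dataset=train_ds,
)
trainer.train()
```

The same pipeline applies to the other two tasks by swapping the target text (a quality label for change quality estimation, or revised code for code refinement).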
📝 Abstract
Code review is essential for maintaining software quality, but it is often time-consuming and cognitively demanding, especially in industrial environments. Recent advances in language models (LMs) have opened new avenues for automating core review tasks. This study presents an empirical evaluation of the effect of monolingual fine-tuning on the performance of open-source LMs across three key automated code review tasks: Code Change Quality Estimation, Review Comment Generation, and Code Refinement. We fine-tuned three distinct models, CodeReviewer, CodeLlama-7B, and DeepSeek-R1-Distill, on a C#-specific dataset combining public benchmarks with industrial repositories. Our study investigates how different configurations of programming languages and natural languages in the training data affect LM performance, particularly in comment generation. Additionally, we benchmark the fine-tuned models against an automated static analysis tool (ASAT) and human reviewers to evaluate their practical utility in real-world settings. Our results show that monolingual fine-tuning improves model accuracy and relevance compared to multilingual baselines. While LMs can effectively support code review workflows, especially for routine or repetitive tasks, human reviewers remain superior in handling semantically complex or context-sensitive changes. Our findings highlight the importance of language alignment and task-specific adaptation in optimizing LMs for automated code review.
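For the benchmarking step, generated review comments are typically compared against human-written references with an overlap metric. The snippet below is a minimal sketch using BLEU via the `evaluate` library; the metric choice and the example texts are assumptions for illustration and may differ from the study's actual protocol.

```python
# Minimal sketch: scoring generated review comments against human references.
# BLEU is a common choice for this task; the study's actual metrics and data
# may differ, and the strings below are purely illustrative.
import evaluate

bleu = evaluate.load("bleu")

predictions = ["Prefer 'is null' over '==' for null checks."]
references = [["Use 'is null' instead of '== null' to avoid overloaded operators."]]

print(bleu.compute(predictions=predictions, references=references))
```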