Benchmarking and Studying the LLM-based Code Review

📅 2025-09-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing automated code review (ACR) benchmarks suffer from insufficient real-world project context, overreliance on fine-grained unit-level tasks, and narrow evaluation metrics, limiting their ability to assess LLMs' practical code review capabilities. This paper introduces SWRBench, the first PR-centric, full-project-context ACR benchmark, comprising 1,000 manually validated GitHub pull requests. We propose an LLM-based objective evaluation method that achieves high agreement with human judgments (Cohen's κ = 0.82). Furthermore, we empirically demonstrate for the first time that multi-review aggregation significantly improves performance, boosting F1 scores by up to 43.67%. Experiments reveal that current LLMs excel at detecting functional bugs but underperform on stylistic and compliance-related issues. Our structured ground-truth construction and semantic coverage assessment enable reproducible, scalable ACR research.
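The reported judge-human agreement (Cohen's κ = 0.82) is a standard chance-corrected agreement statistic. A minimal sketch of κ for two binary raters, e.g. an LLM judge and a human deciding whether each issue is covered (the verdict vectors below are made-up illustrations, not data from the paper):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters giving binary (0/1) verdicts."""
    assert len(a) == len(b) and a, "need equal-length, non-empty ratings"
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pa, pb = sum(a) / n, sum(b) / n              # marginal "yes" rates
    p_e = pa * pb + (1 - pa) * (1 - pb)          # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Illustrative verdicts only (not from the paper):
llm   = [1, 1, 0, 0, 1]
human = [1, 0, 0, 0, 1]
print(round(cohens_kappa(llm, human), 3))  # → 0.615
```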

📝 Abstract
Automated Code Review (ACR) is crucial for software quality, yet existing benchmarks often fail to reflect real-world complexities, hindering the evaluation of modern Large Language Models (LLMs). Current benchmarks frequently focus on fine-grained code units, lack complete project context, and use inadequate evaluation metrics. To address these limitations, we introduce SWRBench, a new benchmark comprising 1000 manually verified Pull Requests (PRs) from GitHub, offering PR-centric review with full project context. SWRBench employs an objective LLM-based evaluation method that aligns strongly with human judgment (~90% agreement) by verifying whether issues from a structured ground truth are covered in generated reviews. Our systematic evaluation of mainstream ACR tools and LLMs on SWRBench reveals that current systems underperform and that ACR tools are more adept at detecting functional errors. Subsequently, we propose and validate a simple multi-review aggregation strategy that significantly boosts ACR performance, increasing F1 scores by up to 43.67%. Our contributions include the SWRBench benchmark, its objective evaluation method, a comprehensive study of current ACR capabilities, and an effective enhancement approach, offering valuable insights for advancing ACR research.
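The scoring the abstract describes (does the review cover each ground-truth issue, and does each generated comment correspond to some ground-truth issue) reduces to recall and precision over per-issue verdicts. A minimal sketch, with the LLM judge's yes/no verdicts mocked as booleans; the names `gt_covered` and `pred_matched` are illustrative, not the paper's API:

```python
def review_f1(gt_covered, pred_matched):
    """F1 from per-issue verdicts: gt_covered[i] says whether ground-truth
    issue i is covered by the generated review (recall side);
    pred_matched[j] says whether generated comment j matches some
    ground-truth issue (precision side)."""
    recall = sum(gt_covered) / len(gt_covered)
    precision = sum(pred_matched) / len(pred_matched)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative verdicts: 2 of 3 ground-truth issues covered,
# 1 of 2 generated comments valid.
print(round(review_f1([True, True, False], [True, False]), 3))  # → 0.571
```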
Problem

Research questions and friction points this paper is trying to address.

Existing benchmarks fail to reflect real-world code review complexities
Current evaluation methods lack complete project context and adequate metrics
Modern LLM-based code review systems underperform in comprehensive evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

SWRBench benchmark with 1000 verified GitHub pull requests
Objective LLM evaluation method with 90% human agreement
Multi-review aggregation strategy boosting F1 scores by up to 43.67%
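The aggregation idea above can be sketched as voting over issues raised across several independently sampled reviews. This is a simplification: the paper matches issues semantically, while this toy version votes on normalized issue strings, and `min_votes` is an assumed parameter:

```python
from collections import Counter

def aggregate_reviews(reviews, min_votes=2):
    """Keep issues raised in at least `min_votes` of the sampled reviews.
    Each review is a list of issue descriptions; matching here is by
    normalized text, a stand-in for semantic matching."""
    votes = Counter()
    for review in reviews:
        # De-duplicate within one review so it votes at most once per issue.
        for issue in {i.strip().lower() for i in review}:
            votes[issue] += 1
    return sorted(issue for issue, n in votes.items() if n >= min_votes)

samples = [
    ["Off-by-one in loop bound", "Missing null check"],
    ["Missing null check", "Unclear variable name"],
    ["Missing null check", "Off-by-one in loop bound"],
]
print(aggregate_reviews(samples))
# → ['missing null check', 'off-by-one in loop bound']
```

Majority voting filters out issues that only one sampled review happens to raise, which is one plausible reason aggregation improves precision without sacrificing much recall.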
Authors

Zhengran Zeng, Peking University
Ruikai Shi, Peking University, Beijing, China
Keke Han, Peking University, Beijing, China
Yixin Li, Stony Brook University
Kaicheng Sun, Northwestern Polytechnical University, Xi'an, China
Yidong Wang, Peking University, Beijing, China
Zhuohao Yu, Peking University
Rui Xie, Peking University, Beijing, China
Wei Ye, Peking University, Beijing, China
Shikun Zhang, Peking University