🤖 AI Summary
Senior developers face excessive cognitive load during code review, struggling to balance feature development with quality assurance. Method: This paper proposes a lightweight automated code review approach integrating large language models (LLMs) with static program analysis. Static analysis extracts structural features and potential defects, which guide the LLM to perform semantic-level understanding and generate context-sensitive, human-readable review comments—enabling dual verification of code structure and semantics. Contribution/Results: Compared to standalone LLM- or static-analysis-based approaches, our method significantly improves review accuracy and interpretability while reducing false positives. Preliminary industrial evaluation demonstrates that the tool effectively helps developers prioritize high-value review tasks, achieving a 78% comment adoption rate. Positive feedback from senior engineers confirms its feasibility and practical utility in real-world development workflows.
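The static-analysis-guided prompting described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes Python's `ast` module as a stand-in for the (unnamed) static analyzer, and the finding types and prompt wording are invented for the example.

```python
import ast

def extract_findings(source: str) -> list[str]:
    """Collect simple structural facts and potential defects via Python's
    ast module (a stand-in for the paper's static analyzer)."""
    tree = ast.parse(source)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            n_args = len(node.args.args)
            findings.append(f"function '{node.name}' takes {n_args} argument(s)")
            if node.returns is None:
                findings.append(f"function '{node.name}' lacks a return type annotation")
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append("bare 'except:' clause swallows all exceptions")
    return findings

def build_review_prompt(source: str) -> str:
    """Embed the static-analysis findings in the LLM prompt so the generated
    comments stay grounded in verifiable structure (the 'dual verification'
    of structure and semantics)."""
    bullets = "\n".join(f"- {f}" for f in extract_findings(source)) or "- (none)"
    return (
        "You are reviewing the code below. Static analysis reported:\n"
        f"{bullets}\n\n"
        "Write concise, human-readable review comments that confirm or refine "
        "these findings; do not raise issues the code does not support.\n\n"
        f"```python\n{source}\n```"
    )

snippet = (
    "def load(path):\n"
    "    try:\n"
    "        return open(path).read()\n"
    "    except:\n"
    "        return None\n"
)
print(build_review_prompt(snippet))
```

Grounding the prompt this way is what lets the approach cut false positives: the LLM elaborates on findings the analyzer can verify rather than free-associating over the raw diff.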
📝 Abstract
Code review is one of the primary means of assuring the quality of released software, along with testing and static analysis. However, code review requires experienced developers, who may not always have the time to perform an in-depth review. Thus, automating code review can help alleviate the cognitive burden on experienced software developers, allowing them to focus on their primary activities of writing code to add new features and fix bugs. In this paper, we describe our experience in using Large Language Models to automate the code review process at Ericsson. We describe the development of a lightweight tool combining LLMs with static program analysis. We then describe our preliminary experiments, in which experienced developers evaluated our code review tool, and the encouraging results.