LLMCup: Ranking-Enhanced Comment Updating with LLMs

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the decline in software maintainability caused by comments that become outdated after code changes, this paper proposes an automated comment updating method built on large language models (LLMs). The approach introduces a multi-strategy prompting mechanism to generate diverse candidate comments and designs CupRank, a lightweight ranking model that selects the best candidate as the final updated comment. Compared with the baselines CUP and HebCup, the method improves Accuracy by 49.0%-116.9%, BLEU-4 by 10.8%-20.0%, METEOR by 4.6%, F1 by 0.9%-1.9%, and Sentence-BERT similarity by 2.1%-3.4%. A user study further indicates that its outputs sometimes surpass human-written comments. The core contribution is the first LLM-driven "generation + ranking" framework for comment updating, which improves semantic fidelity and readability, particularly for complex code modifications.
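The generate-then-rank flow described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the LLM call is stubbed out with toy prompt-strategy functions, and CupRank is stood in for by a simple token-overlap score, since the paper's real ranking model and API are not given here.

```python
# Illustrative sketch of a "generation + ranking" comment-update pipeline.
# All names (generate_candidates, rank_and_select, overlap_score) are
# hypothetical; a real system would query an LLM and use a learned ranker.

def generate_candidates(old_comment, code_diff, strategies):
    # One candidate updated comment per prompt strategy.
    return [strategy(old_comment, code_diff) for strategy in strategies]

def overlap_score(candidate, code_diff):
    # Surrogate ranking signal: token overlap between the candidate
    # comment and the changed code (stand-in for CupRank).
    cand_tokens = set(candidate.lower().split())
    diff_tokens = set(code_diff.lower().split())
    return len(cand_tokens & diff_tokens) / max(len(cand_tokens), 1)

def rank_and_select(candidates, code_diff):
    # Pick the highest-scoring candidate as the final updated comment.
    return max(candidates, key=lambda c: overlap_score(c, code_diff))

# Toy "strategies" standing in for diverse LLM prompting variants.
strategies = [
    lambda old, diff: old,  # conservative: keep the comment as-is
    lambda old, diff: f"Returns the user id from {diff}",
]

code_diff = "return user.id"
candidates = generate_candidates("Returns the user name.", code_diff, strategies)
best = rank_and_select(candidates, code_diff)
```

Here the second candidate wins because it shares tokens with the changed code; the point is only the two-stage shape (diverse generation, then ranking), not the scoring function itself.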

📝 Abstract
While comments are essential for enhancing code readability and maintainability in modern software projects, developers are often motivated to update code but not comments, leading to outdated or inconsistent documentation that hinders future understanding and maintenance. Recent approaches such as CUP and HebCup have attempted automatic comment updating using neural sequence-to-sequence models and heuristic rules, respectively. However, these methods can miss or misinterpret crucial information during comment updating, resulting in inaccurate comments, and they often struggle with complex update scenarios. Given these challenges, a promising direction lies in leveraging large language models (LLMs), which have shown impressive performance in software engineering tasks such as comment generation, code synthesis, and program repair. This suggests their strong potential to capture the logic behind code modifications, an ability that is crucial for the task of comment updating. Nevertheless, selecting an appropriate prompt strategy for an LLM on each update case remains challenging. To address this, we propose a novel comment updating framework, LLMCup, which first uses multiple prompt strategies to generate diverse candidate updated comments via an LLM, and then employs a ranking model, CupRank, to select the best candidate as the final updated comment. Experimental results demonstrate the effectiveness of LLMCup, with improvements over state-of-the-art baselines (CUP and HebCup) by 49.0%-116.9% in Accuracy, 10.8%-20.0% in BLEU-4, 4.6% in METEOR, 0.9%-1.9% in F1, and 2.1%-3.4% in SentenceBert similarity. Furthermore, a user study shows that comments updated by LLMCup sometimes surpass human-written updates, highlighting the importance of incorporating human evaluation in comment quality assessment.
Problem

Research questions and friction points this paper is trying to address.

Automating comment updates to match code changes accurately
Overcoming limitations of existing neural and heuristic methods
Optimizing LLM prompt strategies for diverse update scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LLMs for diverse comment updating candidates
Uses ranking model to select best updated comment
Improves accuracy and similarity metrics significantly