Larger Is Not Always Better: Leveraging Structured Code Diffs for Comment Inconsistency Detection

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses semantic inconsistency between code and comments that arises when comments are not updated after code modifications. The authors propose a lightweight, privacy-preserving, just-in-time detection method. The approach introduces structured code differencing, modeling fine-grained edit operations (addition, deletion, modification) as sequences, into comment inconsistency detection, eliminating the reliance on large language models (LLMs). Built on the CodeT5+ architecture, it jointly models change-activity sequences and employs contrastive learning, requiring no LLM fine-tuning. The method captures evolutionary code history while imposing minimal computational overhead. Evaluated on the JITDATA and CCIBENCH benchmarks, it achieves absolute F1-score improvements of 13.54% and 4.18–10.94%, respectively, over state-of-the-art fine-tuned code models (e.g., DeepSeek-Coder), significantly outperforming existing approaches.
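The contrastive learning mentioned in the summary can be illustrated with a generic InfoNCE-style objective over paired comment and code-change embeddings: matched pairs in a batch act as positives, all other pairings as negatives. This is a minimal sketch under generic assumptions, not the paper's exact loss; the function name and NumPy formulation are illustrative.

```python
import numpy as np

def infonce_loss(comment_vecs, change_vecs, temperature=0.07):
    """InfoNCE-style contrastive loss over paired (comment, code-change)
    embeddings. Row i of each matrix is one example; the diagonal of the
    similarity matrix holds the positive pairs. Illustrative sketch only."""
    # L2-normalise so dot products become cosine similarities
    c = comment_vecs / np.linalg.norm(comment_vecs, axis=1, keepdims=True)
    g = change_vecs / np.linalg.norm(change_vecs, axis=1, keepdims=True)
    logits = c @ g.T / temperature            # (N, N) similarity matrix
    # numerically stable row-wise log-softmax
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy against the diagonal (matched) pairs
    return -np.mean(np.diag(log_probs))
```

Training with such an objective pulls a comment's embedding toward the embedding of the code change it correctly describes and pushes it away from unrelated changes, which is what lets consistency be scored by similarity at inference time.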

📝 Abstract
Ensuring semantic consistency between source code and its accompanying comments is crucial for program comprehension, effective debugging, and long-term maintainability. Comment inconsistency arises when developers modify code but neglect to update the corresponding comments, potentially misleading future maintainers and introducing errors. Recent approaches to code-comment inconsistency (CCI) detection leverage Large Language Models (LLMs) and rely on capturing the semantic relationship between code changes and outdated comments. However, they often ignore the structural complexity of code evolution, including historical change activities, and introduce privacy and resource challenges. In this paper, we propose a Just-In-Time CCI detection approach built upon the CodeT5+ backbone. Our method decomposes code changes into ordered sequences of modification activities, such as replacing, deleting, and adding, to more effectively capture the correlation between these changes and the corresponding outdated comments. Extensive experiments conducted on publicly available benchmark datasets, JITDATA and CCIBENCH, demonstrate that our proposed approach outperforms recent state-of-the-art models by up to 13.54% in F1-score and achieves an improvement ranging from 4.18% to 10.94% over fine-tuned LLMs including DeepSeek-Coder, CodeLlama, and Qwen2.5-Coder.
Problem

Research questions and friction points this paper is trying to address.

Detects inconsistencies between code changes and outdated comments
Addresses limitations of LLMs in capturing structural code evolution
Improves detection accuracy over existing models and fine-tuned LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses CodeT5+ backbone for just-in-time comment inconsistency detection
Decomposes code changes into ordered sequences of modification activities
Captures correlation between structural code evolution and outdated comments
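The decomposition of a code change into an ordered sequence of modification activities (replace, delete, insert) can be sketched with Python's standard `difflib`; this is a rough analogue of the structured code differencing the paper describes, and the function name and output format are illustrative assumptions, not the paper's implementation.

```python
import difflib

def change_activity_sequence(old_code: str, new_code: str):
    """Decompose a code change into an ordered list of edit operations:
    tuples of (activity, removed_lines, added_lines), where activity is
    one of 'replace', 'delete', or 'insert'. Illustrative sketch only."""
    old_lines = old_code.splitlines()
    new_lines = new_code.splitlines()
    matcher = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    ops = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            continue  # unchanged regions carry no modification activity
        ops.append((tag, old_lines[i1:i2], new_lines[j1:j2]))
    return ops

old = "def area(r):\n    # area of a circle\n    return 3.14 * r * r"
new = ("def area(r):\n    # area of a circle\n"
       "    import math\n    return math.pi * r ** 2")
for op in change_activity_sequence(old, new):
    print(op)
```

Serializing such operation sequences, rather than feeding raw before/after snippets, is what lets a compact encoder attend to exactly what changed when judging whether the comment still matches the code.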
Phong Nguyen
Anh M. T. Bui
Hanoi University of Science and Technology, Hanoi, Vietnam
Phuong T. Nguyen
University of L’Aquila, L’Aquila, Italy