🤖 AI Summary
Traditional reference-based automatic metrics (e.g., BLEU, ROUGE-L) suffer from weak semantic modeling capability and low correlation with human judgments in commit message quality assessment. To address this, we propose the first large language model (LLM)-based automated evaluation framework for commit messages. Our method integrates chain-of-thought reasoning, few-shot prompting, and multi-strategy prompt engineering—requiring no fine-tuning—to achieve fine-grained semantic understanding. Experiments demonstrate that our approach significantly outperforms conventional metrics in accuracy, consistency, and robustness. It achieves strong agreement with human evaluations across key dimensions—including functional completeness, conciseness, and readability—with Pearson correlation coefficients exceeding 0.85. Moreover, the framework maintains acceptable reproducibility, robustness, and fairness despite some inherent scoring variability. By eliminating the need for training and leveraging LLMs’ inherent linguistic capabilities, our method establishes an efficient, scalable, and semantically grounded paradigm for commit message assessment.
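The agreement measure named above can be illustrated with a small self-contained sketch: Pearson correlation between LLM-assigned and human-assigned scores on one quality dimension. The score lists below are toy placeholders, not data from the paper.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 ratings of six commit messages on one dimension
# (e.g., conciseness); illustrative values only.
llm_scores = [5, 4, 2, 3, 5, 1]
human_scores = [5, 4, 3, 3, 4, 1]

r = pearson(llm_scores, human_scores)
print(f"Pearson r = {r:.3f}")  # in this toy example, r exceeds 0.85
```

In practice one would compute this per dimension over a rated benchmark set; values above roughly 0.8 are conventionally read as strong agreement.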
📝 Abstract
Commit messages are essential in software development, documenting and explaining code changes. Yet their quality often falls short in practice, with studies showing significant proportions of empty or inadequate messages. While automated commit message generation has advanced significantly, particularly with Large Language Models (LLMs), the evaluation of generated messages remains challenging. Traditional reference-based automatic metrics like BLEU, ROUGE-L, and METEOR have notable limitations in assessing commit message quality, as they assume a one-to-one mapping between code changes and commit messages, leading researchers to rely on resource-intensive human evaluation. This study investigates the potential of LLMs as automated evaluators of commit message quality. Through systematic experimentation with various prompt strategies and state-of-the-art LLMs, we demonstrate that LLMs combining Chain-of-Thought reasoning with few-shot demonstrations achieve near human-level evaluation proficiency. Our LLM-based evaluator significantly outperforms traditional metrics while maintaining acceptable reproducibility, robustness, and fairness despite some inherent variability. This work presents a comprehensive preliminary study of LLMs as commit message evaluators, offering a scalable alternative to human assessment without sacrificing evaluation quality.
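The evaluation strategy described above (few-shot demonstrations combined with Chain-of-Thought reasoning) can be sketched as a prompt builder. The wording, dimensions, 1-5 scale, and the worked example below are assumptions modeled on the abstract, not the authors' actual prompt.

```python
# Sketch of a few-shot, Chain-of-Thought evaluation prompt for commit
# messages. All prompt text and the demonstration example are illustrative.

DIMENSIONS = ["functional completeness", "conciseness", "readability"]

# One hypothetical few-shot demonstration: a change, a weak message,
# step-by-step reasoning, and per-dimension scores.
FEW_SHOT = [
    {
        "diff": "Fix off-by-one error in pagination loop",
        "message": "fix bug",
        "reasoning": (
            "The message names no component and omits the cause; "
            "it is concise but uninformative about the change."
        ),
        "scores": {"functional completeness": 1, "conciseness": 4, "readability": 3},
    },
]

def build_prompt(diff_summary: str, message: str) -> str:
    """Assemble an evaluation prompt: instructions, demonstrations, target."""
    lines = [
        "You are an expert reviewer of commit messages.",
        f"Rate the message from 1 to 5 on: {', '.join(DIMENSIONS)}.",
        "Reason step by step before giving scores.",
        "",
    ]
    for ex in FEW_SHOT:
        lines += [
            f"Change: {ex['diff']}",
            f"Message: {ex['message']}",
            f"Reasoning: {ex['reasoning']}",
            f"Scores: {ex['scores']}",
            "",
        ]
    # The trailing "Reasoning:" cue elicits the chain of thought first.
    lines += [f"Change: {diff_summary}", f"Message: {message}", "Reasoning:"]
    return "\n".join(lines)

print(build_prompt("Add retry with exponential backoff to HTTP client", "add retries"))
```

The resulting prompt is sent to the LLM as-is; because it ends at the "Reasoning:" cue, the model produces its rationale before the scores, which is the core of the CoT strategy.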