Detecting Multiple Semantic Concerns in Tangled Code Commits

📅 2026-01-29
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of detecting entangled semantic concerns in code commits, a common issue that obscures developer intent and hampers maintainability yet remains poorly handled by existing approaches. We formulate multi-concern detection as a multi-label classification problem and introduce a novel dataset of synthetically entangled commits derived from real-world data. A systematic evaluation of small language models (SLMs) under strict token budgets shows that a fine-tuned 14B-parameter SLM is competitive: it matches a state-of-the-art large model on single-concern commits and remains practically useful with up to three concurrent concerns. Incorporating commit messages, combined with a header-preserving truncation strategy and tailored fine-tuning, reduces Hamming Loss by up to 44% without introducing significant latency, substantially improving detection accuracy.
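The header-preserving truncation mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the whitespace token count is a rough proxy for a real tokenizer, and the header prefixes are the standard `git diff` ones, assumed rather than taken from the paper.

```python
def truncate_diff(diff: str, budget: int = 512) -> str:
    """Sketch of header-preserving truncation: spend the token budget on
    diff header lines first, then fill what remains with body lines,
    re-emitting the kept lines in their original order."""
    # Standard git diff header prefixes (assumed, not from the paper).
    header_prefixes = ("diff --git", "index ", "--- ", "+++ ", "@@")
    lines = diff.splitlines()
    keep, used = set(), 0
    for want_header in (True, False):  # pass 1: headers, pass 2: body
        for i, line in enumerate(lines):
            if line.startswith(header_prefixes) != want_header:
                continue
            cost = max(1, len(line.split()))  # whitespace-token proxy
            if used + cost <= budget:
                keep.add(i)
                used += cost
    return "\n".join(lines[i] for i in sorted(keep))
```

Under a tight budget this keeps the `diff --git` and hunk headers, which carry file-path and location cues, and drops trailing body lines first.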


πŸ“ Abstract
Code commits in a version control system (e.g., Git) should be atomic, i.e., focused on a single goal, such as adding a feature or fixing a bug. In practice, however, developers often bundle multiple concerns into tangled commits, obscuring intent and complicating maintenance. Recent studies have used the Conventional Commits Specification (CCS) and Language Models (LMs) to capture commit intent, demonstrating that Small Language Models (SLMs) can approach the performance of Large Language Models (LLMs) while maintaining efficiency and privacy. However, they do not address tangled commits involving multiple concerns, leaving the feasibility of using LMs for multi-concern detection unresolved. In this paper, we frame multi-concern detection in tangled commits as a multi-label classification problem and construct a controlled dataset of artificially tangled commits based on real-world data. We then present an empirical study using SLMs to detect multiple semantic concerns in tangled commits, examining the effects of fine-tuning, concern count, commit-message inclusion, and header-preserving truncation under practical token-budget limits. Our results show that a fine-tuned 14B-parameter SLM is competitive with a state-of-the-art LLM for single-concern commits and remains usable for up to three concerns. In particular, including commit messages improves detection accuracy by up to 44% (in terms of Hamming Loss) with negligible latency overhead, establishing them as important semantic cues.
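Hamming Loss, the metric the abstract reports, is the fraction of per-concern label slots the classifier gets wrong across all commits. A minimal sketch follows; the concern names and label vectors are illustrative assumptions, not taken from the paper's dataset.

```python
def hamming_loss(y_true, y_pred):
    """Fraction of wrong label slots across all commits and concerns."""
    wrong = sum(t != p
                for row_t, row_p in zip(y_true, y_pred)
                for t, p in zip(row_t, row_p))
    total = sum(len(row) for row in y_true)
    return wrong / total

# Binary label vectors over assumed CCS-style concern types:
# [feat, fix, refactor, docs]
y_true = [[1, 1, 0, 0],   # tangled commit: feat + fix
          [0, 1, 1, 0]]   # tangled commit: fix + refactor
y_pred = [[1, 0, 0, 0],   # model missed the fix concern
          [0, 1, 1, 0]]   # exact match

print(hamming_loss(y_true, y_pred))  # -> 0.125 (1 of 8 slots wrong)
```

Because every concern slot counts equally, partially correct predictions on tangled commits are rewarded, which makes the metric a natural fit for the multi-label framing.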
Problem

Research questions and friction points this paper is trying to address.

tangled commits
semantic concerns
multi-concern detection
code commits
version control
Innovation

Methods, ideas, or system contributions that make the work stand out.

tangled commits
multi-label classification
small language models
commit message semantics
multi-concern detection