🤖 AI Summary
To address user stories that are missing or outdated in legacy systems, this paper investigates a reverse-engineering approach that automatically recovers user stories from source code, focusing on the feasibility of large language models (LLMs) and the critical role of prompt engineering. We conduct a systematic evaluation of five state-of-the-art LLMs (8B–70B parameters) and six prompting strategies on a dataset of 1,750 annotated C++ code snippets. Results demonstrate that even an 8B-parameter model achieves performance comparable to a 70B-parameter model when provided with a single in-context example, highlighting the substantial gains that effective prompting can yield. Moreover, all models attain an average F1 score of 0.80 on code snippets with at most 200 non-commented logical lines of code (NLOC), indicating both high accuracy and practical applicability for documenting real-world legacy systems.
📝 Abstract
User stories are essential in agile development, yet they are often missing or outdated in legacy and poorly documented systems. We investigate whether large language models (LLMs) can automatically recover user stories directly from source code and how prompt design affects output quality. Using 1,750 annotated C++ snippets of varying complexity, we evaluate five state-of-the-art LLMs across six prompting strategies. All models achieve, on average, an F1 score of 0.80 on code up to 200 non-commented logical lines of code (NLOC). Notably, a single illustrative example enables the smallest model (8B) to match the performance of a much larger 70B model, whereas structured reasoning via Chain-of-Thought offers only marginal gains, primarily for larger models.
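The one-shot strategy highlighted above can be sketched as a prompt template that prepends a single code/user-story pair to the target snippet. The example snippet, story wording, and template below are illustrative assumptions, not the paper's actual prompts or data:

```python
# Hypothetical sketch of a one-shot prompt for recovering a user story
# from a C++ snippet. All names and wording here are assumptions for
# illustration; the paper's real prompts and dataset are not reproduced.

EXAMPLE_CODE = """\
void addToCart(Cart& cart, const Item& item) {
    cart.items.push_back(item);
    cart.total += item.price;
}"""

EXAMPLE_STORY = ("As a shopper, I want to add an item to my cart "
                 "so that I can purchase it later.")

def build_one_shot_prompt(target_code: str) -> str:
    """Assemble instruction + single in-context example + target snippet."""
    return (
        "Recover the user story implemented by the following C++ code.\n"
        "Answer in the form: As a <role>, I want <goal> so that <benefit>.\n\n"
        f"Code:\n{EXAMPLE_CODE}\nUser story: {EXAMPLE_STORY}\n\n"
        f"Code:\n{target_code}\nUser story:"
    )

prompt = build_one_shot_prompt("void login(User& u) { /* ... */ }")
print(prompt)
```

The resulting string would then be sent to the model under evaluation; a zero-shot variant would simply omit the example pair.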