🤖 AI Summary
This work proposes a novel approach to address the challenge novice modelers often face in ensuring semantic alignment between domain models and textual specifications during the early phases of software engineering. The method first applies natural language processing to pre-process the specification text and automatically generates an artificial natural language description for each model element. It then leverages a large language model (LLM) to compare these generated descriptions against matched sentences from the original specification, classifying each element as aligned, misaligned, or unclassified (insufficient evidence), while citing the specification sentences that serve as interpretable evidence for each judgment. By combining LLM-based comparison with automatically generated element descriptions, the approach achieves high-precision alignment verification, demonstrating precision close to 1 and a recall of approximately 78% across datasets from diverse domains. Analyzing an individual element takes between 18 seconds and one minute, indicating strong potential for integration into modeling tools.
📝 Abstract
Context: Deriving domain models from textual specifications has proven very useful in the early phases of software engineering. However, creating correct domain models and establishing clear links to the textual specification is challenging, especially for novice modelers.

Objectives: We propose an approach for determining the alignment between a partial domain model and a textual specification.

Methods: To this end, we use Natural Language Processing techniques to pre-process the text, generate an artificial natural language specification for each model element, and then use an LLM to compare the generated description with matched sentences from the original specification. Our algorithm classifies each model element as either aligned (i.e., correct), misaligned (i.e., incorrect), or unclassified (i.e., insufficient evidence). Furthermore, it outputs the related sentences from the textual specification that provide the evidence for the determined class.

Results: We evaluated our approach on a set of examples from the literature spanning diverse domains, each consisting of a textual specification and a reference domain model, as well as on models containing modeling errors that were systematically derived from the correct models through mutation. Our results show that we identify alignments and misalignments with a precision close to 1 and a recall of approximately 78%, with execution times ranging from 18 seconds to 1 minute per model element.

Conclusion: Since our algorithm almost never classifies model elements incorrectly, and is able to classify over three quarters of the model elements, it could be integrated into a modeling tool to provide positive feedback or generate warnings, or employed for offline validation and quality assessment.
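The per-element classification step described in the abstract could be sketched as follows. This is a minimal illustration, not the authors' implementation: the `judge` callable stands in for the LLM comparison prompt, and the three-way decision rule (contradicting evidence wins, otherwise supporting evidence, otherwise unclassified) is an assumption about how the verdicts might be aggregated.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# The three alignment classes from the paper's algorithm.
ALIGNED, MISALIGNED, UNCLASSIFIED = "aligned", "misaligned", "unclassified"

@dataclass
class Verdict:
    label: str            # one of ALIGNED / MISALIGNED / UNCLASSIFIED
    evidence: list[str]   # specification sentences backing the judgment

def classify_element(description: str,
                     sentences: Iterable[str],
                     judge: Callable[[str, str], str]) -> Verdict:
    """Compare a generated element description against matched sentences.

    `judge(description, sentence)` is a placeholder for an LLM call and is
    assumed to return "supports", "contradicts", or "unrelated".
    """
    support: list[str] = []
    contradict: list[str] = []
    for sentence in sentences:
        outcome = judge(description, sentence)
        if outcome == "supports":
            support.append(sentence)
        elif outcome == "contradicts":
            contradict.append(sentence)
    # Assumed aggregation: any contradiction marks the element as misaligned;
    # otherwise supporting sentences mark it aligned; with no evidence either
    # way the element stays unclassified.
    if contradict:
        return Verdict(MISALIGNED, contradict)
    if support:
        return Verdict(ALIGNED, support)
    return Verdict(UNCLASSIFIED, [])
```

In a real pipeline the `judge` callable would wrap an LLM prompt; returning the evidence sentences alongside the label mirrors the abstract's requirement that each classification be accompanied by the specification sentences that justify it.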