🤖 AI Summary
Existing software modeling datasets are often ad hoc constructions lacking rigorous quality assurance, leading to research findings that are difficult to reproduce and compare, and prone to bias. This work proposes the first benchmarking framework specifically designed for model-driven engineering, treating datasets themselves as first-class evaluation targets. By defining clear metrics for quality, representativeness, and task suitability, the framework establishes a unified platform that enables automated analysis of modeling datasets across multiple languages and formats. This approach makes systematic evaluation of modeling datasets possible for the first time, substantially enhancing the reproducibility, fairness, and scientific rigor of research in the field.
📝 Abstract
Empirical and LLM-based research in model-driven engineering increasingly relies on datasets of software models, for instance, to train or evaluate machine learning techniques for modeling support. These datasets have a significant impact on solution performance; hence, they should be treated and assessed as first-class artifacts. However, such datasets are typically collected or created ad hoc, without guarantees of their quality for the specific task for which they are used. This limits the comparability of results between studies, obscures dataset quality and representativeness, and leads to weak reproducibility and potential bias. In this work, we propose a benchmarking framework for model datasets (i.e., benchmarking the datasets themselves). Benchmarking datasets involves systematically measuring their quality, representativeness, and suitability for specific tasks. To this end, we propose a Benchmark Platform for MDE that provides a unified infrastructure for systematically assessing and comparing datasets of software models across languages and formats, using defined criteria and metrics.
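To make the idea of "benchmarking a dataset" concrete, the minimal sketch below computes a few dataset-level indicators (duplicate ratio as a quality proxy, language distribution as a representativeness proxy, median model size as a task-suitability proxy). All names, metrics, and the `Model` structure are hypothetical assumptions for illustration; they are not the actual API of the proposed Benchmark Platform.

```python
# Illustrative sketch only: hypothetical metrics for benchmarking a dataset
# of software models. Not the paper's actual framework or API.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Model:
    identifier: str
    language: str       # e.g., "UML", "Ecore", "BPMN"
    num_elements: int   # size proxy: number of model elements
    content_hash: str   # used to detect duplicate models

def benchmark_dataset(models: list[Model]) -> dict:
    """Compute simple dataset-level quality and representativeness indicators."""
    total = len(models)
    unique_hashes = {m.content_hash for m in models}
    sizes = sorted(m.num_elements for m in models)
    return {
        "size": total,
        # Duplicate ratio: a rough proxy for dataset quality.
        "duplicate_ratio": 1 - len(unique_hashes) / total if total else 0.0,
        # Language distribution: a rough proxy for representativeness.
        "language_distribution": dict(Counter(m.language for m in models)),
        # Median model size: a rough proxy for task suitability
        # (e.g., whether models fit typical LLM context windows).
        "median_model_size": sizes[len(sizes) // 2] if sizes else 0,
    }

if __name__ == "__main__":
    sample = [
        Model("m1", "UML", 120, "a1"),
        Model("m2", "UML", 120, "a1"),   # duplicate of m1 by content hash
        Model("m3", "Ecore", 300, "b7"),
    ]
    print(benchmark_dataset(sample))
```

In a full platform, such metrics would be defined per task (e.g., different suitability criteria for model completion versus model classification) and applied uniformly across modeling languages and serialization formats, which is what enables the cross-dataset comparison the abstract describes.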