🤖 AI Summary
To address data scarcity, fragmented workflows, and insufficient evaluation in domain-adaptive question answering (QA), this paper introduces an integrated, closed-loop platform spanning QA data generation, fine-tuning, and evaluation. Leveraging large language models (LLMs), the platform enables context-aware, adaptive QA pair synthesis; supports interactive dataset browsing and model exploration; and provides multi-dimensional evaluation-metric visualization alongside cross-model performance benchmarking. Its key contributions are: (1) an end-to-end, auditable closed-loop framework for domain QA; (2) tight integration of data-quality assessment with model-behavior analysis; and (3) support for local deployment and full workflow reproducibility. The platform is designed to improve both the development efficiency and the interpretability of domain-specific QA models. The source code will be publicly released.
📝 Abstract
We present QGen Studio, an adaptive question-answer generation, training, and evaluation platform. QGen Studio enables users to leverage large language models (LLMs) to create custom question-answer datasets and fine-tune models on this synthetic data. Two components streamline the process: a dataset viewer and a model explorer. The dataset viewer reports key metrics and visualizes the context from which each QA pair was generated, offering insight into data quality. The model explorer supports side-by-side comparison, letting users benchmark their fine-tuned LLMs against other models and refine them accordingly. QGen Studio thus delivers an interactive, end-to-end solution for generating QA datasets and training scalable, domain-adaptable models. The studio will be open-sourced soon, allowing users to deploy it locally.
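The abstract describes a pipeline that turns context passages into grounded QA pairs for fine-tuning, keeping each pair linked to its source context for the dataset viewer. The sketch below illustrates that data flow under stated assumptions: the prompt format, the function names (`build_prompt`, `parse_qa_lines`, `generate_dataset`), and the stand-in `fake_llm` are all hypothetical illustrations, not QGen Studio's actual API; any prompt-to-text callable (a real LLM client) could be plugged in.

```python
import json

def build_prompt(context: str, num_pairs: int = 3) -> str:
    """Assemble a QA-generation prompt grounding the model in a context passage."""
    return (
        f"Generate {num_pairs} question-answer pairs grounded strictly in the "
        f"following context. Return one JSON object per line with keys "
        f"'question' and 'answer'.\n\nContext:\n{context}"
    )

def parse_qa_lines(raw: str) -> list[dict]:
    """Parse model output (one JSON object per line), skipping malformed lines."""
    pairs = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # tolerate occasional malformed generations
        if {"question", "answer"} <= obj.keys():
            pairs.append({"question": obj["question"], "answer": obj["answer"]})
    return pairs

def generate_dataset(contexts: list[str], llm_call) -> list[dict]:
    """Loop over contexts; `llm_call` is any prompt -> text function.

    Each record retains its source context so a dataset viewer can later
    display where the QA pair came from.
    """
    dataset = []
    for ctx in contexts:
        for pair in parse_qa_lines(llm_call(build_prompt(ctx))):
            dataset.append({**pair, "context": ctx})
    return dataset

# Offline stand-in for a real LLM endpoint, so the sketch runs as-is.
def fake_llm(prompt: str) -> str:
    return '{"question": "What is the platform for?", "answer": "QA generation."}'

if __name__ == "__main__":
    data = generate_dataset(["An adaptive QA generation platform."], fake_llm)
    print(json.dumps(data, indent=2))
```

In practice `llm_call` would wrap an API client or a local model, and the resulting records (question, answer, context) would be written out as JSONL for fine-tuning.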