QGen Studio: An Adaptive Question-Answer Generation, Training and Evaluation Platform

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address data scarcity, fragmented workflows, and limited evaluation in domain-adaptive question answering (QA) model development, this paper introduces an integrated QA data generation, fine-tuning, and evaluation platform. Leveraging large language models (LLMs), the platform enables context-aware, adaptive QA pair synthesis; supports interactive dataset browsing and model exploration; and provides evaluation-metric visualization alongside cross-model performance benchmarking. Its key contributions are: (1) an end-to-end workflow for building domain-specific QA datasets and models; (2) integration of data-quality assessment with model-behavior analysis; and (3) support for local deployment. The authors position the platform as improving both the development efficiency and the interpretability of domain-specific QA models. The source code will be publicly released.

📝 Abstract
We present QGen Studio: an adaptive question-answer generation, training, and evaluation platform. QGen Studio enables users to leverage large language models (LLMs) to create custom question-answer datasets and fine-tune models on this synthetic data. It features a dataset viewer and model explorer to streamline this process. The dataset viewer provides key metrics and visualizes the context from which the QA pairs are generated, offering insights into data quality. The model explorer supports model comparison, allowing users to contrast the performance of their trained LLMs against other models, supporting performance benchmarking and refinement. QGen Studio delivers an interactive, end-to-end solution for generating QA datasets and training scalable, domain-adaptable models. The studio will be open-sourced soon, allowing users to deploy it locally.
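The generation step the abstract describes, synthesizing QA pairs from source context with an LLM while retaining that context for the dataset viewer, can be sketched as follows. The prompt format, function names, and the `generate` callable are illustrative assumptions, not QGen Studio's actual API.

```python
# Illustrative sketch of context-aware QA pair synthesis (hypothetical names,
# not QGen Studio's real interface). `generate` stands in for any LLM call
# that maps a prompt string to a completion string.

def build_qa_prompt(context: str) -> str:
    """Ask the model for one QA pair grounded in the given passage."""
    return (
        "Read the passage and write one question answerable from it, "
        "followed by the answer.\n"
        f"Passage: {context}\n"
        "Format:\nQ: <question>\nA: <answer>"
    )

def parse_qa(completion: str) -> dict:
    """Extract the question and answer lines from the model's completion."""
    question, answer = None, None
    for line in completion.splitlines():
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:"):
            answer = line[2:].strip()
    return {"question": question, "answer": answer}

def synthesize_pairs(contexts, generate):
    """Build a synthetic QA dataset, keeping each pair's source context
    so a dataset viewer can visualize where the pair came from."""
    dataset = []
    for ctx in contexts:
        pair = parse_qa(generate(build_qa_prompt(ctx)))
        pair["context"] = ctx  # provenance for data-quality inspection
        dataset.append(pair)
    return dataset
```

The resulting records pair each synthetic question and answer with its source passage, which is what makes the data auditable downstream.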
Problem

Research questions and friction points this paper is trying to address.

Creating domain-specific QA datasets is costly, and suitable training data is often scarce
Dataset generation, fine-tuning, and evaluation are typically fragmented across separate tools
Synthetic data quality and trained-model performance are hard to assess without integrated tooling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LLMs for context-aware, custom QA dataset creation
Dataset viewer surfaces key metrics and the source context of each QA pair for quality insight
Model explorer enables side-by-side comparison and benchmarking of trained LLMs
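The cross-model benchmarking the abstract mentions could look like the sketch below. The abstract does not name the metrics, so exact match and token-level F1, the standard extractive-QA choices, are assumed here; `benchmark` and its `answer_fn` interface are likewise hypothetical.

```python
# Hedged sketch of QA benchmarking with exact match and token-level F1
# (standard metrics assumed; not necessarily those used by QGen Studio).
from collections import Counter

def exact_match(pred: str, gold: str) -> float:
    """1.0 if prediction and gold answer match after normalization."""
    return float(pred.strip().lower() == gold.strip().lower())

def token_f1(pred: str, gold: str) -> float:
    """Harmonic mean of token precision and recall between answers."""
    p_tokens, g_tokens = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p_tokens) & Counter(g_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_tokens)
    recall = overlap / len(g_tokens)
    return 2 * precision * recall / (precision + recall)

def benchmark(models, dataset):
    """Average EM/F1 per model. `models` maps a name to an answer_fn that
    takes (question, context) and returns a predicted answer string."""
    results = {}
    for name, answer_fn in models.items():
        preds = [answer_fn(ex["question"], ex["context"]) for ex in dataset]
        results[name] = {
            "em": sum(exact_match(p, ex["answer"])
                      for p, ex in zip(preds, dataset)) / len(dataset),
            "f1": sum(token_f1(p, ex["answer"])
                      for p, ex in zip(preds, dataset)) / len(dataset),
        }
    return results
```

Running two answer functions through `benchmark` on the same dataset yields the side-by-side scores a model explorer would visualize.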
Authors
Movina Moses, IBM Research
Mohab Elkaref, Research Scientist, IBM Research UK (Natural Language Processing, Deep Learning)
James Barry, IBM Research (Natural Language Processing)
Shinnosuke Tanaka, IBM Research Europe
Vishnudev Kuruvanthodi, IBM Research Europe
Nathan Herr, University College London
Campbell D Watson, IBM Research (climate, sustainability, AI)
Geeth De Mel, IBM Research Europe