ReqFusion: A Multi-Provider Framework for Automated PEGS Analysis Across Software Domains

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the heavy reliance on manual effort in software requirements engineering and the lack of efficient, cross-domain automated approaches to requirements extraction. The authors propose a multi-large-language-model (LLM) ensemble system based on the PEGS framework, orchestrating models from OpenAI (GPT), Anthropic (Claude), and Groq through a structured prompting mechanism. By combining consensus-based decision-making with a fault-tolerant architecture, the system automatically extracts and classifies both functional and non-functional requirements from diverse document types. Evaluated on 18 real-world documents, the approach achieves an F1 score of 0.88, a 24% improvement over generic prompting, and an extended evaluation on 1,050 requirement instances shows a 78% reduction in analysis time compared to manual methods. The solution proves effective across academic, industrial, and tendering contexts.
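The consensus-plus-fallback behaviour described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the provider functions are stubs standing in for real GPT, Claude, and Groq calls, and all names are assumptions.

```python
# Hedged sketch: majority-vote consensus across providers, with fault tolerance
# (a failing provider is skipped rather than aborting the analysis).
from collections import Counter

def classify_with_providers(requirement, providers):
    """Query each provider for a label; skip failures; majority-vote the result."""
    votes = []
    for provider in providers:
        try:
            votes.append(provider(requirement))
        except RuntimeError:
            continue  # fallback: ignore an unavailable provider
    if not votes:
        raise RuntimeError("all providers failed")
    label, _count = Counter(votes).most_common(1)[0]
    return label

# Stub providers (illustrative only; real calls would hit the vendor APIs).
def gpt_stub(req):    return "functional" if "shall" in req else "non-functional"
def claude_stub(req): return "functional" if "shall" in req else "non-functional"
def groq_stub(req):   raise RuntimeError("provider unavailable")  # simulated outage

label = classify_with_providers("The system shall export PDF reports.",
                                [gpt_stub, claude_stub, groq_stub])
print(label)  # -> functional
```

Even with one provider down, the remaining votes still yield a classification, which is the reliability benefit the summary attributes to the multi-provider design.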

📝 Abstract
Requirements engineering is a vital, yet labor-intensive, stage in the software development process. This article introduces ReqFusion, an AI-enhanced system that automates the extraction, classification, and analysis of software requirements using multiple Large Language Model (LLM) providers. The architecture of ReqFusion integrates OpenAI GPT, Anthropic Claude, and Groq models to extract functional and non-functional requirements from various documentation formats (PDF, DOCX, and PPTX) in academic, industrial, and tender proposal contexts. The system uses a domain-independent extraction method and generates requirements following the Project, Environment, Goal, and System (PEGS) approach introduced by Bertrand Meyer. The main idea is that, because the PEGS format is detailed, LLMs have more information and cues about the requirements, producing better results than a simple generic request. An ablation study confirms this hypothesis: PEGS-guided prompting achieves an F1 score of 0.88, compared to 0.71 for generic prompting under the same multi-provider configuration. The evaluation used 18 real-world documents to generate 226 requirements through automated classification, 54.9% functional and 45.1% non-functional, across academic, business, and technical domains. An extended evaluation on five projects with 1,050 requirements demonstrated significant improvements in extraction accuracy and a 78% reduction in analysis time compared to manual methods. The multi-provider architecture enhances reliability through model consensus and fallback mechanisms, while the PEGS-based approach ensures comprehensive coverage of all requirement categories.
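The PEGS-guided prompting idea from the abstract can be sketched as a prompt builder that asks the model to sort requirements into the four PEGS books. The wording of the template is an assumption for illustration; the paper's actual prompts are not reproduced here.

```python
# Hedged sketch of a PEGS-structured prompt (Project, Environment, Goal, System).
# The template text below is illustrative, not the paper's prompt.
PEGS_SECTIONS = ("Project", "Environment", "Goal", "System")

def build_pegs_prompt(document_text):
    """Build a prompt asking an LLM to classify requirements into PEGS books."""
    section_lines = "\n".join(
        f"- {name}: list the requirements that belong to the {name} book"
        for name in PEGS_SECTIONS
    )
    return (
        "Extract functional and non-functional requirements from the document "
        "below, and assign each one to a PEGS category:\n"
        f"{section_lines}\n\nDocument:\n{document_text}"
    )

prompt = build_pegs_prompt("The app shall sync offline edits within 5 seconds.")
```

The point of the structured template is the one the abstract makes: naming all four PEGS categories gives the model explicit cues, which the ablation study credits for the 0.88 vs. 0.71 F1 gap against generic prompting.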
Problem

Research questions and friction points this paper is trying to address.

Requirements Engineering
Automated Requirement Extraction
Functional and Non-functional Requirements
PEGS Analysis
Multi-domain Software Documentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReqFusion
PEGS prompting
multi-provider LLM
automated requirements engineering
domain-independent extraction
Muhammad Khalid
Constructor University Bremen, 28759 Bremen, Germany
Manuel Oriol
Constructor Institute of Technology; Constructor University
Software Engineering, Real-Time Systems, Dynamic Software Updates, Software Testing
Yilmaz Uygun
Constructor University Bremen, 28759 Bremen, Germany