Evaluating Software Process Models for Multi-Agent Class-Level Code Generation

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior work on LLM-based code generation predominantly focuses on single-agent, function-level synthesis, neglecting the impact of software process structure and role specialization on class-level generation. Method: We propose a multi-agent workflow grounded in the waterfall development lifecycle (requirements → design → implementation → testing), employing GPT-4o-mini, DeepSeek-Chat, and Claude-3.5-Haiku to generate solutions for 100 Python tasks from the ClassEval benchmark. Contribution/Results: Process constraints significantly improve code maintainability but introduce novel semantic errors—revealing a fundamental trade-off between rigid procedural discipline and flexible reasoning. Multi-agent collaboration yields non-uniform performance: Claude-3.5-Haiku achieves a 9.5% gain in functional correctness, whereas other models suffer ~40% degradation. Crucially, the testing phase exerts the strongest influence on verification coverage. This study provides the first systematic evidence that software process architecture fundamentally shapes LLMs’ collaborative reasoning patterns in class-level code generation.

📝 Abstract
Modern software systems require code that is not only functional but also maintainable and well-structured. Although Large Language Models (LLMs) are increasingly used to automate software development, most studies focus on isolated, single-agent function-level generation. This work examines how process structure and role specialization shape multi-agent LLM workflows for class-level code generation. We simulate a Waterfall-style development cycle covering Requirement, Design, Implementation, and Testing using three LLMs (GPT-4o-mini, DeepSeek-Chat, and Claude-3.5-Haiku) on 100 Python tasks from the ClassEval benchmark. Our findings show that multi-agent workflows reorganize, rather than consistently enhance, model performance. Waterfall-style collaboration produces cleaner and more maintainable code but often reduces functional correctness (-37.8% for GPT-4o-mini and -39.8% for DeepSeek-Chat), with Claude-3.5-Haiku as a notable exception (+9.5%). Importantly, process constraints shift failure characteristics: structural issues such as missing code decrease, while semantic and validation errors become more frequent. Among all stages, Testing exerts the strongest influence by improving verification coverage but also introducing new reasoning failures, whereas Requirement and Design have comparatively modest effects. Overall, this study provides empirical evidence that software process structure fundamentally alters how LLMs reason, collaborate, and fail, revealing inherent trade-offs between rigid workflow discipline and flexible problem-solving in multi-agent code generation.
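The Waterfall-style workflow described above can be sketched as a simple staged pipeline in which each role-specialized agent consumes the previous stage's artifact. This is a minimal illustration, not the authors' implementation: `call_llm` is a hypothetical stand-in for a real model API (GPT-4o-mini, DeepSeek-Chat, or Claude-3.5-Haiku in the paper), and the stage prompts are invented for the example.

```python
# Sketch of a Requirement -> Design -> Implementation -> Testing pipeline
# with one role-specialized agent per stage. All names here are illustrative.
from dataclasses import dataclass


def call_llm(role: str, prompt: str) -> str:
    # Placeholder: a real implementation would call a model API here.
    return f"[{role} output for: {prompt[:40]}...]"


@dataclass
class Artifacts:
    task: str
    requirements: str = ""
    design: str = ""
    implementation: str = ""
    test_report: str = ""


# (role name, prompt template keyed on the upstream artifact)
STAGES = [
    ("Requirement", "Extract functional requirements for this class:\n{task}"),
    ("Design", "Propose a class design meeting these requirements:\n{requirements}"),
    ("Implementation", "Implement the class from this design:\n{design}"),
    ("Testing", "Write and run tests against this implementation:\n{implementation}"),
]


def run_waterfall(task: str) -> Artifacts:
    """Run each stage in order, threading artifacts downstream only."""
    art = Artifacts(task=task)
    out_fields = ["requirements", "design", "implementation", "test_report"]
    for (role, template), out_field in zip(STAGES, out_fields):
        prompt = template.format(**vars(art))
        setattr(art, out_field, call_llm(role, prompt))
    return art


result = run_waterfall("A Stack class with push, pop, and peek (ClassEval-style task)")
print(result.test_report)
```

The strictly downstream data flow is what makes the process "Waterfall": no stage can revise an upstream artifact, which mirrors the rigidity the paper identifies as the source of both cleaner structure and new semantic errors.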
Problem

Research questions and friction points this paper is trying to address.

Evaluating multi-agent LLM workflows for class-level code generation
Assessing how process structure affects code quality and correctness
Analyzing trade-offs between workflow discipline and problem-solving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent workflows simulate Waterfall development cycle
Process structure shifts failure characteristics in code generation
Testing stage exerts strongest influence on model performance
Wasique Islam Shafin
SPEAR Lab, Concordia University, Montreal, QC, Canada
Md Nakhla Rafi
SPEAR Lab, Concordia University, Montreal, QC, Canada
Zhenhao Li
York University, Toronto, Ontario, Canada
Tse-Hsun (Peter) Chen
SPEAR Lab, Associate Professor in Computer Science, Concordia University, Montreal, Canada
LLM4SE, Log Analysis, Automated Debugging, AIOps, Software Performance Engineering