🤖 AI Summary
This work proposes a structured prompting framework leveraging large language models (LLMs) to reduce the human effort and complexity inherent in Domain-Driven Design (DDD). The approach decomposes the DDD process into five sequential steps: establishing a ubiquitous language, simulating event storming, identifying bounded contexts, designing aggregates, and mapping to a technical architecture. It represents the first systematic application of prompt engineering across the entire DDD workflow, positioning the LLM as an expert collaborator rather than a replacement for human designers. Experimental results demonstrate that the first three steps reliably produce high-quality design artifacts, such as domain glossaries and context maps, whereas the last two steps suffer from error propagation, underscoring the necessity of human-in-the-loop collaboration for critical design decisions.
📝 Abstract
Domain-driven design (DDD) is a powerful design technique for architecting complex software systems. This paper introduces a prompting framework that automates core DDD activities through structured large language model (LLM) interactions. We decompose DDD into five sequential steps: (1) establishing a ubiquitous language, (2) simulating event storming, (3) identifying bounded contexts, (4) designing aggregates, and (5) mapping to a technical architecture. In a case study, we validated the prompting framework against real-world requirements from FTAPI's enterprise platform. While the early steps consistently generate valuable and usable artifacts, the later steps show how minor errors and inaccuracies can propagate and accumulate: in our evaluation, Steps 1 to 3 worked well, but the accumulated errors rendered the artifacts generated by Steps 4 and 5 impractical. Overall, the framework excels as a collaborative sparring partner for building actionable documentation, such as glossaries and context maps, rather than as a tool for full automation, allowing experts to concentrate their discussion on the critical trade-offs. Our findings show that LLMs can enhance, but not replace, architectural expertise, offering a practical tool to reduce the effort and overhead of DDD while preserving human-centric decision-making.
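The five-step workflow above is a sequential prompt chain in which each step's artifact is fed into the next step's context, which is also why errors propagate downstream. A minimal sketch of that chaining idea, assuming a hypothetical `call_llm` stub in place of a real LLM API and illustrative prompt wording not taken from the paper:

```python
# Sketch of a sequential DDD prompting pipeline: each step's output artifact
# is appended to the context passed to the next step. The `call_llm` stub and
# step phrasings are hypothetical placeholders, not the paper's actual prompts.

DDD_STEPS = [
    "Establish a ubiquitous language (glossary) for the domain.",
    "Simulate an event storming session over the requirements.",
    "Identify bounded contexts from the domain events.",
    "Design aggregates within each bounded context.",
    "Map the design to a technical architecture.",
]

def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM call; echoes the step it was asked about.
    return f"[artifact for: {prompt.splitlines()[0]}]"

def run_pipeline(requirements: str) -> list[str]:
    """Run all five steps, feeding each artifact into the next prompt."""
    context = requirements
    artifacts = []
    for step in DDD_STEPS:
        prompt = f"{step}\nContext so far:\n{context}"
        artifact = call_llm(prompt)
        artifacts.append(artifact)
        context += "\n" + artifact  # errors in this artifact propagate downstream
    return artifacts

artifacts = run_pipeline("Requirements for a secure data-transfer platform ...")
```

Because the accumulated context is shared state, a human review gate between steps (especially before Steps 4 and 5) is the natural place to stop error accumulation, matching the paper's human-in-the-loop finding.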