Design2GarmentCode: Turning Design Concepts to Tangible Garments Through Program Synthesis

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing sewing pattern generation methods struggle to map the multimodal nature of design concepts onto the geometric precision required for garment construction. This paper introduces the first program-synthesis-driven multimodal sewing pattern generation framework, decoupling large multimodal models’ semantic understanding from structured cutting logic to enable cross-domain knowledge transfer and semantics-controllable generation. Methodologically, it integrates multimodal prompt engineering, parametric pattern programming language modeling, multimodal alignment distillation, and geometry-constraint-aware decoding. Experiments demonstrate a 3.2× improvement in training efficiency and significant gains in generation quality—measured by BLEU-4 and Geometric F1—over state-of-the-art approaches. The framework supports heterogeneous inputs (sketches, images, text), producing vectorized, dimensionally accurate patterns with correct seam relationships. It further enables interactive editing and batch customization, advancing practical applicability in digital fashion design.

📝 Abstract
Sewing patterns, the essential blueprints for fabric cutting and tailoring, act as a crucial bridge between design concepts and producible garments. However, existing uni-modal sewing pattern generation models struggle to effectively encode complex design concepts with a multi-modal nature and correlate them with vectorized sewing patterns that possess precise geometric structures and intricate sewing relations. In this work, we propose Design2GarmentCode, a novel sewing pattern generation approach based on Large Multimodal Models (LMMs), which generates parametric pattern-making programs from multi-modal design concepts. LMMs offer an intuitive interface for interpreting diverse design inputs, while pattern-making programs serve as well-structured and semantically meaningful representations of sewing patterns, acting as a robust bridge connecting the cross-domain pattern-making knowledge embedded in LMMs with vectorized sewing patterns. Experimental results demonstrate that our method can flexibly handle various complex design expressions such as images, textual descriptions, designer sketches, or their combinations, and convert them into size-precise sewing patterns with correct stitches. Compared to previous methods, our approach significantly enhances training efficiency, generation quality, and authoring flexibility.
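To make the idea of a "parametric pattern-making program" concrete, here is a minimal sketch of what such a program could look like. This is not the paper's actual DSL: `Panel`, `Stitch`, and `straight_skirt` are illustrative names, and the representation (2D panel outlines plus edge-to-edge stitch pairings) is a simplified assumption about how vectorized sewing patterns are commonly structured.

```python
from dataclasses import dataclass


@dataclass
class Panel:
    """A flat garment piece: a named 2D outline (cm), ordered counter-clockwise."""
    name: str
    vertices: list  # list of (x, y) tuples; edge i runs vertex i -> vertex i+1


@dataclass
class Stitch:
    """A seam pairing edge `edge_a` of one panel with edge `edge_b` of another."""
    panel_a: str
    edge_a: int
    panel_b: str
    edge_b: int


def straight_skirt(waist: float, length: float) -> dict:
    """Parametric program: emit a two-panel straight skirt for the given measurements."""
    half = waist / 2  # each panel covers half the waist circumference
    front = Panel("front", [(0, 0), (half, 0), (half, length), (0, length)])
    back = Panel("back", [(0, 0), (half, 0), (half, length), (0, length)])
    # Side seams: join the right edge of the front to the left edge of the back
    # and vice versa (edge indices follow the vertex ordering above).
    stitches = [
        Stitch("front", 1, "back", 3),
        Stitch("front", 3, "back", 1),
    ]
    return {"panels": [front, back], "stitches": stitches}
```

Because the program is parametric, regenerating a size-adjusted pattern is a single call (e.g. `straight_skirt(waist=80.0, length=60.0)`), which is what makes this representation amenable to LMM-driven editing and batch customization.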
Problem

Research questions and friction points this paper is trying to address.

Encoding multi-modal design concepts into sewing patterns
Generating precise geometric and sewing-relation-aware vectorized patterns
Bridging cross-domain knowledge from LMMs to tangible garments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Large Multimodal Models for pattern generation
Converts multi-modal designs into parametric programs
Enhances precision and flexibility in garment creation
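The claim of "size-precise sewing patterns with correct stitches" implies a geometric consistency requirement: two edges joined by a seam should have (nearly) equal lengths. The checker below is a hedged sketch of such a constraint, not the paper's decoding mechanism; the dict-based pattern format and the function names are assumptions made for illustration.

```python
import math


def edge_length(vertices, i):
    """Length of edge i, which runs from vertex i to vertex i+1 (wrapping around)."""
    (x0, y0) = vertices[i]
    (x1, y1) = vertices[(i + 1) % len(vertices)]
    return math.hypot(x1 - x0, y1 - y0)


def check_stitches(panels, stitches, tol=0.1):
    """Return a list of error messages for stitches whose paired edges
    differ in length by more than `tol` (cm); empty list means consistent."""
    by_name = {p["name"]: p for p in panels}
    errors = []
    for s in stitches:
        la = edge_length(by_name[s["panel_a"]]["vertices"], s["edge_a"])
        lb = edge_length(by_name[s["panel_b"]]["vertices"], s["edge_b"])
        if abs(la - lb) > tol:
            errors.append(
                f"{s['panel_a']}[{s['edge_a']}] vs {s['panel_b']}[{s['edge_b']}]: "
                f"{la:.1f} != {lb:.1f}"
            )
    return errors
```

A constraint like this could be applied after generation to reject or repair programs whose seams cannot be physically sewn, which is one plausible reading of the summary's "geometry-constraint-aware decoding."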