🤖 AI Summary
Existing automated graphic design methods face two key bottlenecks: traditional two-stage pipelines lack intelligence and creativity, while diffusion-based approaches generate only non-editable pixel-level images with blurry text rendering and limited practicality. This paper proposes the first natural-language-driven, editable multimodal layer generation framework, integrating a multimodal large language model (MLLM) and a diffusion model in an end-to-end jointly trained architecture. The method introduces a new paradigm of parameterized rendering coupled with image asset co-generation: the MLLM parses user instructions to predict layer attributes and layout structure, while the diffusion model synthesizes high-fidelity visual content. Experiments across diverse design scenarios demonstrate significant improvements over state-of-the-art methods, enabling efficient generation of high-fidelity, semantically aligned, fully editable vector and layered design files, effectively bridging creative flexibility and engineering practicality.
📝 Abstract
Graphic design visually conveys information by creating and combining text, images, and graphics. Two-stage methods that rely primarily on layout generation lack creativity and intelligence, leaving graphic design labor-intensive. Existing diffusion-based methods generate non-editable, image-level design files with poor legibility in rendered text, which prevents them from achieving satisfactory, practical automated graphic design. In this paper, we propose Instructional Graphic Designer (IGD), which swiftly generates editable multimodal layers from natural language instructions alone. IGD adopts a new paradigm that combines parametric rendering with image asset generation. First, we develop a design platform and establish a standardized format for multi-scenario design files, laying the foundation for scaling up data. Second, IGD uses the multimodal understanding and reasoning capabilities of an MLLM to perform attribute prediction, layer sequencing, and layout, and employs a diffusion model to generate image content for assets. By enabling end-to-end training, IGD architecturally supports scalability and extensibility for complex graphic design tasks. Experimental results demonstrate that IGD offers a new solution for graphic design.
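To make the idea of a "standardized format for multi-scenario design files" concrete, here is a minimal, hypothetical sketch of what a parametric, layer-based design file could look like. All class and field names below (`Layer`, `DesignFile`, `z_order`, etc.) are illustrative assumptions, not IGD's actual schema: the point is only that layer attributes stay as editable parameters while image content is a reference to a generated asset.

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical sketch of a layered design file (NOT IGD's real format):
# layer attributes (position, size, fonts) remain editable parameters,
# while image layers reference assets a diffusion model would fill in.

@dataclass
class Layer:
    kind: str                  # "text" | "image" | "shape"
    x: int                     # top-left position on the canvas, in pixels
    y: int
    width: int
    height: int
    z_order: int               # stacking order; lower layers are drawn first
    content: str = ""          # text string, or a placeholder for a generated asset
    attrs: dict = field(default_factory=dict)  # e.g. font, color, corner radius

@dataclass
class DesignFile:
    canvas_width: int
    canvas_height: int
    layers: list

    def to_json(self) -> str:
        """Serialize to a human-readable, editable design file."""
        return json.dumps(asdict(self), indent=2)

    @classmethod
    def from_json(cls, text: str) -> "DesignFile":
        data = json.loads(text)
        data["layers"] = [Layer(**layer) for layer in data["layers"]]
        return cls(**data)

# A two-layer poster: a generated background image plus an editable headline.
doc = DesignFile(
    canvas_width=1080,
    canvas_height=1920,
    layers=[
        Layer("image", 0, 0, 1080, 1920, z_order=0,
              content="<diffusion asset: sunset skyline>"),
        Layer("text", 80, 120, 920, 200, z_order=1,
              content="Summer Sale", attrs={"font": "Inter-Bold", "size": 96}),
    ],
)

roundtrip = DesignFile.from_json(doc.to_json())
print(roundtrip.layers[1].content)  # the text layer survives serialization intact
```

Because every layer is stored as parameters rather than rasterized pixels, a user (or a downstream editor) can change the headline text or move a layer without regenerating the whole image, which is the editability gap the abstract attributes to diffusion-only methods.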