CreatiLayout: Siamese Multimodal Diffusion Transformer for Creative Layout-to-Image Generation

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
🤖 AI Summary
This work addresses weak layout guidance and limited controllability in layout-to-image (L2I) generation. We propose SiamLayout, a Siamese Multimodal Diffusion Transformer (MM-DiT) and the first architecture to model layout as an independent modality on equal footing with text and image. It decouples the image-layout interaction into a siamese branch alongside the image-text branch, fusing the two at a later stage to avoid modality competition. To support training and evaluation, we construct LayoutSAM, a large-scale dataset of 2.7M image-text pairs with 10.7M annotated entities, and LayoutSAM-Eval, a comprehensive benchmark. Furthermore, we integrate a large language model–driven Layout Designer to improve the rationality of layout planning. On LayoutSAM-Eval, our method significantly outperforms UNet-based baselines, achieving state-of-the-art performance in layout fidelity, content consistency, and aesthetic quality. Code, models, and datasets are publicly released.

📝 Abstract
Diffusion models have been recognized for their ability to generate images that are not only visually appealing but also of high artistic quality. As a result, Layout-to-Image (L2I) generation has been proposed to leverage region-specific positions and descriptions to enable more precise and controllable generation. However, previous methods primarily focus on UNet-based models (e.g., SD1.5 and SDXL), and little effort has been made to explore Multimodal Diffusion Transformers (MM-DiTs), which have demonstrated powerful image generation capabilities. Enabling MM-DiT for layout-to-image generation seems straightforward but is challenging due to the complexity of how layout is introduced, integrated, and balanced among multiple modalities. To this end, we explore various network variants to efficiently incorporate layout guidance into MM-DiT, and ultimately present SiamLayout. To inherit the advantages of MM-DiT, we use a separate set of network weights to process the layout, treating it as equally important as the image and text modalities. Meanwhile, to alleviate the competition among modalities, we decouple the image-layout interaction into a siamese branch alongside the image-text one and fuse them in the later stage. Moreover, we contribute a large-scale layout dataset, named LayoutSAM, which includes 2.7 million image-text pairs and 10.7 million entities. Each entity is annotated with a bounding box and a detailed description. We further construct the LayoutSAM-Eval benchmark as a comprehensive tool for evaluating the L2I generation quality. Finally, we introduce the Layout Designer, which taps into the potential of large language models in layout planning, transforming them into experts in layout generation and optimization. Our code, model, and dataset will be available at https://creatilayout.github.io.
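The core architectural idea in the abstract (layout processed by its own set of weights, with the image-layout interaction running in a siamese branch parallel to image-text and fused late) can be sketched as below. This is a minimal single-head illustration, not the paper's implementation: the token shapes, weight initialization, and additive fusion are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens, Wq, Wk, Wv):
    """Single-head cross-attention: q_tokens attend to kv_tokens."""
    Q = q_tokens @ Wq
    K = kv_tokens @ Wk
    V = kv_tokens @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d = 16                                # token dimension (illustrative)
img    = rng.normal(size=(64, d))     # image tokens
text   = rng.normal(size=(8, d))      # text tokens
layout = rng.normal(size=(4, d))      # layout tokens, e.g. one per bounding box

# Layout gets its OWN projection weights, separate from the text branch,
# mirroring "a separate set of network weights to process the layout".
W_text   = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
W_layout = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]

img_text   = cross_attention(img, text,   *W_text)    # image-text branch
img_layout = cross_attention(img, layout, *W_layout)  # siamese image-layout branch

# Late fusion: the two branch outputs are combined only at the end,
# so the layout and text signals do not compete inside one attention op.
fused = img + img_text + img_layout
print(fused.shape)  # (64, 16)
```

The design point the sketch illustrates is that layout never shares an attention operation with text; each modality interacts with the image through its own weights, and competition is deferred to a simple fusion step.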
Problem

Research questions and friction points this paper is trying to address.

How to incorporate layout guidance into Multimodal Diffusion Transformers, which prior UNet-based L2I methods leave unexplored.
How to mitigate competition among modalities when layout, text, and image are jointly encoded.
The lack of a large-scale dataset and benchmark for training and evaluating layout-to-image generation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Siamese Multimodal Diffusion Transformer for layout-to-image generation
Separate network weights for layout, image, and text modalities
Large-scale LayoutSAM dataset with detailed annotations