🤖 AI Summary
Current multimodal generative models face two key bottlenecks as design assistants: insufficient comprehension of ambiguous instructions and difficulty maintaining both content consistency and creativity under reference guidance. To address these, we propose WeGen, a unified architecture that enables generation and understanding to reinforce each other. It combines interleaved sequence modeling of object dynamics, consistency-aware generation, and prompt self-rewriting to support interactive, iterative multimodal creation. Built upon multimodal sequence modeling, WeGen is trained on a large-scale dataset of object dynamics auto-annotated by foundation models, with dynamics descriptions and visual content interleaved into single sequences, enabling controllable refinement while preserving content the user is already satisfied with. Experiments demonstrate that WeGen achieves state-of-the-art performance on visual generation benchmarks, significantly improving creativity, reference fidelity, and user controllability, validating its effectiveness as an efficient, intuitive design copilot.
📝 Abstract
Existing multimodal generative models fall short as qualified design copilots: they often struggle to generate imaginative outputs when instructions are less detailed, or lack the ability to maintain consistency with the provided references. In this work, we introduce WeGen, a model that unifies multimodal generation and understanding, and promotes their interplay in iterative generation. It can generate diverse, highly creative results from less detailed instructions, and it can progressively refine prior generation results or integrate specific content from references following the instructions in its chat with users. Throughout this process, it preserves consistency in the parts the user is already satisfied with. To this end, we curate a large-scale dataset, extracted from Internet videos, containing rich object dynamics with dynamics descriptions auto-labeled by state-of-the-art foundation models. These two types of information are interleaved into a single sequence, enabling WeGen to learn consistency-aware generation in which the specified dynamics are generated while the consistency of unspecified content is preserved, in line with the instructions. In addition, we introduce a prompt self-rewriting mechanism to enhance generation diversity. Extensive experiments demonstrate the effectiveness of unifying multimodal understanding and generation in WeGen and show that it achieves state-of-the-art performance across various visual generation benchmarks. The results also demonstrate the potential of WeGen as the user-friendly design copilot we envision. The code and models will be available at https://github.com/hzphzp/WeGen.
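The abstract's data construction can be pictured as follows. This is a minimal, hypothetical sketch (not the paper's actual pipeline): it assumes each training sample is a pair of video-frame token lists and an auto-labeled dynamics caption, and that hypothetical boundary tokens `<img>`/`</img>` delimit visual spans, so that visual content and its dynamics description end up interleaved in one sequence.

```python
def build_interleaved_sequence(frame_tokens, dynamics_captions):
    """Interleave visual tokens and auto-labeled dynamics text into one sequence.

    frame_tokens: list of per-clip visual token lists (placeholders here).
    dynamics_captions: matching auto-labeled dynamics descriptions.
    The <img>/</img> boundary tokens are illustrative assumptions, not
    identifiers from the WeGen codebase.
    """
    sequence = []
    for tokens, caption in zip(frame_tokens, dynamics_captions):
        sequence.append("<img>")      # open a visual span
        sequence.extend(tokens)       # visual tokens for this clip
        sequence.append("</img>")     # close the visual span
        sequence.extend(caption.split())  # interleave the dynamics description
    return sequence


# Toy usage with placeholder tokens:
seq = build_interleaved_sequence(
    [["v0", "v1"], ["v2", "v3"]],
    ["dog jumps", "dog lands"],
)
```

Training on such sequences is what lets a unified model condition later visual spans on both earlier frames (for consistency) and the interleaved text (for the specified dynamics).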