🤖 AI Summary
This work introduces a unified diffusion-model framework that jointly addresses multiple image generation and editing tasks: text-to-RGBX generation, RGB-to-X intrinsic decomposition (e.g., depth, normals, albedo), intrinsic-conditioned RGBX generation, and global/local editing. Methodologically, building on a pre-trained text-to-image diffusion model, the authors design a joint output mechanism for the intrinsic ("X") layers and a multi-condition fine-tuning strategy, conditioning on both text prompts and selected intrinsic layers, so that a single model handles all tasks end to end. The key contribution is consistent joint generation of RGB images and their intrinsic maps, unifying decomposition, conditional generation, and editing within one architecture rather than the conventional sequential or multi-model pipelines. Experiments show competitive performance on both intrinsic decomposition and intrinsic-conditioned generation while preserving the base model's text-to-image capability, along with improved cross-task consistency and generation fidelity.
📝 Abstract
We present PRISM, a unified framework that enables multiple image generation and editing tasks in a single foundation model. Starting from a pre-trained text-to-image diffusion model, PRISM employs an effective fine-tuning strategy to produce RGB images along with intrinsic maps (referred to as X layers) simultaneously. Unlike previous approaches, which infer intrinsic properties individually or require separate models for decomposition and conditional generation, PRISM maintains consistency across modalities by generating all intrinsic layers jointly. It supports diverse tasks, including text-to-RGBX generation, RGB-to-X decomposition, and X-to-RGBX conditional generation. Additionally, PRISM enables both global and local image editing through conditioning on selected intrinsic layers and text prompts. Extensive experiments demonstrate PRISM's competitive performance on both intrinsic image decomposition and conditional image generation while preserving the base model's text-to-image generation capability.
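To make the "joint RGB + X layers" idea concrete, here is a minimal toy sketch of one common way such joint modeling can be set up: stack all layers along the channel axis and, at each denoising step, re-impose any layer used as a condition (a stand-in for X-to-RGBX conditional generation). Everything here is hypothetical and illustrative; it does not reproduce the actual PRISM architecture, which builds on a pre-trained text-to-image diffusion model.

```python
# Illustrative sketch only: a toy version of channel-stacked joint RGB + X
# modeling with per-layer conditioning. All names are hypothetical, not PRISM's.
import numpy as np

# Layers modeled jointly: RGB plus intrinsic maps (depth, normals, albedo).
LAYERS = {"rgb": 3, "depth": 1, "normal": 3, "albedo": 3}
TOTAL_C = sum(LAYERS.values())  # all layers stacked along the channel axis


def toy_denoise_step(x, cond, noise_scale=0.1, rng=None):
    """One toy denoising step over the stacked layers.

    x    : (H, W, TOTAL_C) array holding RGB and X layers together.
    cond : dict mapping layer name -> fixed (H, W, C) array; conditioned
           layers are clamped each step so the generated layers stay
           consistent with them.
    """
    rng = rng or np.random.default_rng(0)
    # Stand-in for the network's denoising prediction.
    x = x - noise_scale * rng.standard_normal(x.shape)
    # Re-impose every conditioning layer at its channel slice.
    offset = 0
    for name, c in LAYERS.items():
        if name in cond:
            x[..., offset:offset + c] = cond[name]
        offset += c
    return x


# Example: condition on depth, jointly "generate" the remaining layers.
rng = np.random.default_rng(42)
x = rng.standard_normal((8, 8, TOTAL_C))
depth = np.ones((8, 8, 1))  # hypothetical fixed depth condition
for _ in range(5):
    x = toy_denoise_step(x, {"depth": depth}, rng=rng)
```

Because all layers share one tensor, a single pass updates them together, which is the intuition behind cross-modal consistency; the real model replaces the toy update with a learned diffusion denoiser conditioned on text.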