🤖 AI Summary
This work addresses text-driven stylized 3D deformable face modeling: constructing a stylized 3D Morphable Model (3DMM) that preserves the source identity, expression, and facial alignment, given only a user-provided text prompt (e.g., “cartoon” or “oil painting”). Methodologically, we propose an attribute-preserving, text-guided image-to-image translation scheme: a diffusion model stylizes rendered faces while explicitly retaining identity, alignment, and expression, and the resulting stylized images serve as targets for fine-tuning a pre-trained mesh deformation network and texture generator, yielding disentangled control over shape, expression, and texture. Because training stays within the 3DMM parameter space, the generated meshes retain consistent topology and vertex connectivity and remain animation-ready. Compared to state-of-the-art methods, our approach significantly improves identity-level facial diversity and stylization capability, enabling feed-forward generation of high-quality, editable, and riggable stylized 3D face meshes.
📝 Abstract
We introduce StyleMM, a novel framework that can construct a stylized 3D Morphable Model (3DMM) based on user-defined text descriptions specifying a target style. Building upon a pre-trained mesh deformation network and a texture generator for original 3DMM-based realistic human faces, our approach fine-tunes these models using stylized facial images generated via text-guided image-to-image (i2i) translation with a diffusion model, which serve as stylization targets for the rendered mesh. To prevent undesired changes in identity, facial alignment, or expressions during i2i translation, we introduce a stylization method that explicitly preserves the facial attributes of the source image. By maintaining these critical attributes during image stylization, the proposed approach ensures consistent 3D style transfer across the 3DMM parameter space through image-based training. Once trained, StyleMM enables feed-forward generation of stylized face meshes with explicit control over shape, expression, and texture parameters, producing meshes with consistent vertex connectivity and animatability. Quantitative and qualitative evaluations demonstrate that our approach outperforms state-of-the-art methods in terms of identity-level facial diversity and stylization capability. The code and videos are available at [kwanyun.github.io/stylemm_page](kwanyun.github.io/stylemm_page).
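To make the training scheme concrete, below is a minimal, purely illustrative sketch of the image-based fine-tuning loop the abstract describes: a frozen diffusion model produces an attribute-preserving stylized target for the rendered face, and only the deformation/texture parameters are updated toward it. Every name here (`render_face`, `stylize_i2i`, the scalar stand-ins for images and networks) is a hypothetical placeholder, not the paper's actual API.

```python
# Illustrative sketch of StyleMM-style fine-tuning (hypothetical names).
# Scalars stand in for rendered images and network parameters; the real
# method uses a differentiable renderer and neural networks.

def render_face(shape, expression, texture):
    """Stand-in for differentiable rendering of a 3DMM face."""
    return shape + expression + texture

def stylize_i2i(image, prompt):
    """Stand-in for frozen, attribute-preserving text-guided i2i
    translation: the source value is kept (identity/alignment/expression
    'preserved') and only shifted by a fixed style offset."""
    return image + 1.0  # pretend the prompt shifts appearance by a constant

# Generate the stylization target ONCE from the source rendering;
# the diffusion model itself is never updated.
shape, expr, tex = 0.2, 0.1, 0.0
target = stylize_i2i(render_face(shape, expr, tex), prompt="cartoon")

lr = 0.25
for _ in range(10):
    rendered = render_face(shape, expr, tex)
    loss = (rendered - target) ** 2        # L2 loss to the stylized target
    grad = 2.0 * (rendered - target)       # d(loss)/d(tex)
    tex -= lr * grad                       # update texture parameters only;
                                           # shape/expr stay disentangled

print(f"final loss: {loss:.2e}")
```

Because identity, alignment, and expression are preserved in the target, the loop only has to close the style gap; shape and expression parameters can be left untouched, which is what keeps the stylized 3DMM animatable with its original rig.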