StyleMM: Stylized 3D Morphable Face Model via Text-Driven Aligned Image Translation

📅 2025-08-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses text-driven stylized 3D Morphable Model (3DMM) construction: given only a user-provided text prompt (e.g., "cartoon" or "oil painting"), it produces a stylized 3DMM that preserves the source identity, expression, and facial alignment. Methodologically, the authors fine-tune a pre-trained mesh deformation network and texture generator using stylized facial images produced by attribute-preserving, text-guided image-to-image translation with a diffusion model; these images serve as stylization targets for the rendered meshes. Because training operates within the 3DMM parameter space, the resulting meshes retain consistent vertex connectivity and remain animation-ready, with disentangled control over shape, expression, and texture. Compared with state-of-the-art methods, the approach improves identity-level facial diversity and stylization capability, enabling feed-forward generation of high-quality, editable, and riggable stylized 3D face meshes.

📝 Abstract
We introduce StyleMM, a novel framework that can construct a stylized 3D Morphable Model (3DMM) based on user-defined text descriptions specifying a target style. Building upon a pre-trained mesh deformation network and a texture generator for original 3DMM-based realistic human faces, our approach fine-tunes these models using stylized facial images generated via text-guided image-to-image (i2i) translation with a diffusion model, which serve as stylization targets for the rendered mesh. To prevent undesired changes in identity, facial alignment, or expressions during i2i translation, we introduce a stylization method that explicitly preserves the facial attributes of the source image. By maintaining these critical attributes during image stylization, the proposed approach ensures consistent 3D style transfer across the 3DMM parameter space through image-based training. Once trained, StyleMM enables feed-forward generation of stylized face meshes with explicit control over shape, expression, and texture parameters, producing meshes with consistent vertex connectivity and animatability. Quantitative and qualitative evaluations demonstrate that our approach outperforms state-of-the-art methods in terms of identity-level facial diversity and stylization capability. The code and videos are available at [kwanyun.github.io/stylemm_page](kwanyun.github.io/stylemm_page).
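The abstract describes an image-based fine-tuning loop: render a face from 3DMM parameters, stylize the render with attribute-preserving text-guided i2i translation, and use the stylized image as a training target for the mesh deformation network and texture generator. The following is a highly simplified, hypothetical sketch of that loop structure, not the authors' implementation: every function here is an invented stub (the "renderer" sums parameters, the "diffusion i2i" adds a fixed offset), and only the texture parameters are updated, via a hand-derived L2 gradient.

```python
# Illustrative sketch of the StyleMM-style training loop described in the
# abstract. All components are hypothetical stubs, NOT the paper's models.

def render_face(shape, expr, texture_params):
    """Stub differentiable renderer: 3DMM params -> 'image' (list of floats)."""
    return [s + e + t for s, e, t in zip(shape, expr, texture_params)]

def stylize_i2i(image, prompt):
    """Stub for attribute-preserving text-guided i2i translation.
    The real method uses a diffusion model that restyles the image while
    keeping identity, alignment, and expression fixed."""
    offset = 0.5 if prompt == "cartoon" else 0.2
    return [px + offset for px in image]

def finetune_step(texture_params, shape, expr, prompt, lr=0.1):
    """One image-based fine-tuning step: pull the rendered face toward its
    stylized target (gradient of 0.5 * (rendered - target)^2)."""
    rendered = render_face(shape, expr, texture_params)
    target = stylize_i2i(rendered, prompt)  # target regenerated each step
    return [t - lr * (r - g) for t, r, g in zip(texture_params, rendered, target)]

shape, expr = [0.1, 0.2], [0.0, 0.1]
texture = [0.3, 0.4]
for _ in range(50):
    texture = finetune_step(texture, shape, expr, "cartoon")
```

In this toy version the stylized target is recomputed from the current render at every step, mirroring the paper's setup where stylization targets are generated from the rendered meshes rather than fixed in advance.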
Problem

Research questions and friction points this paper is trying to address.

Construct stylized 3D face models from text descriptions
Preserve facial attributes during text-guided image stylization
Ensure consistent 3D style transfer across parameter space
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-driven aligned image translation for 3DMM
Diffusion-based i2i translation to generate stylization targets
Attribute-preserving stylization for consistent 3D transfer
Seungmi Lee
KAIST, Visual Media Lab
Kwan Yun
KAIST, Visual Media Lab
Junyong Noh
KAIST
Facial/Character Animation · VR/AR · Immersive Display