🤖 AI Summary
To address the texture fragmentation, semantic misalignment, and poor aesthetic adaptability of 3D Gaussian Splatting (3DGS) in stylized scenarios (e.g., cartoons, games), this paper proposes the first end-to-end style transfer framework tailored to 3D Gaussian representations. Methodologically, it introduces dynamic style score distillation, a contrastive style descriptor, cooperative multi-scale optimization, and a differentiable 3D Gaussian quality evaluator, integrated with Stable Diffusion latent-space guidance, multi-encoder semantic alignment, and contrastive texture modeling. It achieves, for the first time, multi-granularity semantic disentanglement and 3D stylization driven by human aesthetic priors. Evaluated on NeRF-synthesized objects and real-world Tanks and Temples scenes, the method significantly improves geometric detail fidelity (e.g., sculptural textures) and cross-region style consistency (e.g., global illumination) while preserving real-time rendering.
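To illustrate the score-distillation idea behind the "dynamic style score distillation" component, below is a minimal, generic sketch of one score distillation (SDS-style) step in Stable Diffusion's latent space using PyTorch and diffusers. This is not the authors' implementation: the dynamic weighting that distinguishes their method, classifier-free guidance, and the usual timestep weighting are all omitted, and the prompt and function names are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
pipe.vae.requires_grad_(False)   # SD stays frozen; only the render is optimized
pipe.unet.requires_grad_(False)

# Encode a style prompt once; gradients never flow into the text encoder.
tokens = pipe.tokenizer(
    "oil painting in the style of van Gogh", padding="max_length",
    max_length=pipe.tokenizer.model_max_length, return_tensors="pt",
)
with torch.no_grad():
    text_emb = pipe.text_encoder(tokens.input_ids.to(device))[0]

def sds_style_loss(rendered_rgb):
    """rendered_rgb: (B, 3, 512, 512) in [0, 1], differentiable w.r.t. the
    Gaussians' color attributes through a differentiable 3DGS rasterizer."""
    # Map the render into SD latent space (the VAE expects inputs in [-1, 1]).
    x = (rendered_rgb * 2.0 - 1.0).to(pipe.vae.dtype)
    latents = pipe.vae.encode(x).latent_dist.sample()
    latents = latents * pipe.vae.config.scaling_factor
    # Perturb with noise at a random diffusion timestep.
    t = torch.randint(50, 950, (latents.shape[0],), device=device)
    noise = torch.randn_like(latents)
    noisy = pipe.scheduler.add_noise(latents, noise, t)
    # The denoiser's error relative to the true noise is the distillation signal.
    with torch.no_grad():
        eps = pipe.unet(
            noisy, t,
            encoder_hidden_states=text_emb.expand(noisy.shape[0], -1, -1),
        ).sample
    grad = eps - noise
    # Surrogate loss whose gradient w.r.t. `latents` equals `grad` (standard SDS trick).
    return (grad.detach() * latents).sum()
```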
📝 Abstract
3D Gaussian Splatting (3DGS) excels at photorealistic scene reconstruction but struggles with stylized scenarios (e.g., cartoons, games) due to fragmented textures, semantic misalignment, and limited adaptability to abstract aesthetics. We propose StyleMe3D, a holistic framework for 3DGS style transfer that integrates multi-modal style conditioning, multi-level semantic alignment, and perceptual quality enhancement. Our key insights are: (1) optimizing only RGB attributes preserves geometric integrity during stylization; (2) disentangling low-, medium-, and high-level semantics is critical for coherent style transfer; and (3) scalability across isolated objects and complex scenes is essential for practical deployment. StyleMe3D introduces four novel components: Dynamic Style Score Distillation (DSSD), which leverages Stable Diffusion's latent space for semantic alignment; Contrastive Style Descriptor (CSD) for localized, content-aware texture transfer; Simultaneously Optimized Scale (SOS), which decouples style details from structural coherence; and 3D Gaussian Quality Assessment (3DG-QA), a differentiable aesthetic prior trained on human-rated data to suppress artifacts and enhance visual harmony. Evaluated on the NeRF synthetic dataset (objects) and the tandt_db dataset (Tanks and Temples and Deep Blending scenes), StyleMe3D outperforms state-of-the-art methods in preserving geometric details (e.g., carvings on sculptures) and ensuring stylistic consistency across scenes (e.g., coherent lighting in landscapes), while maintaining real-time rendering. This work bridges photorealistic 3DGS and artistic stylization, unlocking applications in gaming, virtual worlds, and digital art.
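To make insight (1) concrete, here is a minimal PyTorch sketch of freezing all geometric Gaussian attributes and exposing only color to the style optimizer. The attribute names are illustrative (real 3DGS implementations store color as spherical-harmonic coefficients rather than flat RGB), and the dummy loss stands in for the paper's style objectives applied to a differentiable render.

```python
import torch

# Hypothetical minimal 3DGS parameter set: only "rgb" requires gradients,
# so positions, scales, rotations, and opacities never change during
# stylization and the original geometry is preserved exactly.
N = 10_000
params = {
    "xyz":      torch.randn(N, 3),   # positions      (frozen)
    "scale":    torch.rand(N, 3),    # per-axis scale (frozen)
    "rotation": torch.randn(N, 4),   # quaternions    (frozen)
    "opacity":  torch.rand(N, 1),    # alpha          (frozen)
    "rgb":      torch.rand(N, 3).requires_grad_(True),  # only trainable attribute
}
optimizer = torch.optim.Adam([params["rgb"]], lr=1e-2)

for step in range(100):
    optimizer.zero_grad()
    # Placeholder for a differentiable render plus a style loss (e.g., the
    # score-distillation sketch above); a dummy loss keeps this self-contained.
    loss = ((params["rgb"] - 0.5) ** 2).mean()
    loss.backward()
    optimizer.step()
```

Because only the color channels receive gradients, this setup cannot distort fine structure such as the sculptural carvings the abstract highlights, which is the rationale behind insight (1).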