FairyGen: Storied Cartoon Video from a Single Child-Drawn Character

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of automatically converting children's hand-drawn characters into stylistically consistent and narratively coherent cartoon animation videos, a task hindered by poor fidelity to the hand-drawn style and by a disconnect between motion and narrative. We propose a three-stage framework: (1) a novel style propagation adapter enabling cross-scene transfer of hand-drawn aesthetics; (2) a two-stage motion customization adapter that decouples identity representation from dynamic motion modeling; and (3) an integrated pipeline combining MLLM-driven storyboard generation, 3D character proxy reconstruction, and MMDiT-based video diffusion for cinematic shot composition and temporally natural motion synthesis. Experiments demonstrate significant improvements over state-of-the-art methods in style consistency, narrative coherence, and motion fluency. To our knowledge, this is the first approach to achieve end-to-end generation of high-quality narrative animation videos from a single child's sketch.
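
To make the three-stage flow concrete, here is a minimal Python sketch of how the stages might compose. Every name in it (Shot, the adapter/model objects and their methods) is a hypothetical placeholder for illustration, not the authors' released interface.

```python
# A minimal sketch of how the three stages might compose. Every name here
# (Shot, the adapter/model objects and their methods) is a hypothetical
# placeholder for illustration, not the authors' released interface.
from dataclasses import dataclass

@dataclass
class Shot:
    environment: str  # scene setting from the storyboard
    action: str       # character action for this shot
    camera: str       # camera perspective, e.g. "close-up", "wide shot"

def run_fairygen(drawing, mllm, style_adapter, shot_designer, motion_model):
    # Stage 1: the MLLM expands the single drawing into shot-level text.
    shots: list[Shot] = mllm.generate_storyboard(drawing)

    clips = []
    for shot in shots:
        # Stage 2: propagate the drawing's style onto a new background
        # while keeping the character's full identity intact.
        keyframe = style_adapter.synthesize_scene(drawing, shot.environment)
        # Shot design: cropping / multi-view synthesis per the storyboard.
        keyframe = shot_designer.compose(keyframe, shot.camera)
        # Stage 3: animate the keyframe with the motion-customized
        # MMDiT image-to-video model, conditioned on the action.
        clips.append(motion_model.animate(keyframe, shot.action))
    return clips  # to be concatenated/rendered downstream
```

The design choice the summary highlights is that identity (the drawing's look) and motion are handled by separate adapters, so each shot can reuse the same character while varying scene, camera, and action.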

📝 Abstract
We propose FairyGen, an automatic system for generating story-driven cartoon videos from a single child's drawing, while faithfully preserving its unique artistic style. Unlike previous storytelling methods that primarily focus on character consistency and basic motion, FairyGen explicitly disentangles character modeling from stylized background generation and incorporates cinematic shot design to support expressive and coherent storytelling. Given a single character sketch, we first employ an MLLM to generate a structured storyboard with shot-level descriptions that specify environment settings, character actions, and camera perspectives. To ensure visual consistency, we introduce a style propagation adapter that captures the character's visual style and applies it to the background, faithfully retaining the character's full visual identity while synthesizing style-consistent scenes. A shot design module further enhances visual diversity and cinematic quality through frame cropping and multi-view synthesis based on the storyboard. To animate the story, we reconstruct a 3D proxy of the character to derive physically plausible motion sequences, which are then used to fine-tune an MMDiT-based image-to-video diffusion model. We further propose a two-stage motion customization adapter: the first stage learns appearance features from temporally unordered frames, disentangling identity from motion; the second stage models temporal dynamics using a timestep-shift strategy with frozen identity weights. Once trained, FairyGen directly renders diverse and coherent video scenes aligned with the storyboard. Extensive experiments demonstrate that our system produces animations that are stylistically faithful and narratively structured, with natural motion, highlighting its potential for personalized and engaging story animation. The code will be available at https://github.com/GVCLab/FairyGen.
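
The two-stage motion customization scheme is the abstract's most implementation-specific detail, so here is a hedged PyTorch-style sketch of how it might be trained. The `model.denoising_loss` entry point, the adapter modules, and `video_clips.sample()` are assumptions for illustration, and the timestep-shift formula shown is one common form (as used in rectified-flow models), not necessarily the paper's.

```python
# A hedged PyTorch-style sketch of the two-stage motion customization
# adapter described above. `model.denoising_loss`, the adapter modules,
# and `video_clips.sample()` are illustrative assumptions, not the
# paper's actual code.
import torch

def train_stage1(model, identity_adapter, frames, steps):
    """Stage 1: learn appearance from temporally UNORDERED frames,
    so the adapter captures identity but no motion."""
    opt = torch.optim.AdamW(identity_adapter.parameters(), lr=1e-4)
    for _ in range(steps):
        batch = frames[torch.randperm(len(frames))[:8]]  # shuffling destroys order
        t = torch.rand(batch.shape[0])                   # uniform timesteps
        loss = model.denoising_loss(batch, t, adapters=[identity_adapter])
        loss.backward(); opt.step(); opt.zero_grad()

def shifted_timesteps(n, shift=3.0):
    """Timestep shift: bias samples toward the high-noise end, where coarse
    temporal dynamics are decided. One plausible form; the paper's exact
    schedule may differ."""
    t = torch.rand(n)
    return shift * t / (1.0 + (shift - 1.0) * t)

def train_stage2(model, identity_adapter, motion_adapter, video_clips, steps):
    """Stage 2: freeze identity weights and fit only the motion adapter
    on temporally ORDERED clips under the shifted timestep distribution."""
    identity_adapter.requires_grad_(False)
    opt = torch.optim.AdamW(motion_adapter.parameters(), lr=1e-4)
    for _ in range(steps):
        clip = video_clips.sample()          # an ordered frame sequence
        t = shifted_timesteps(clip.shape[0])
        loss = model.denoising_loss(
            clip, t, adapters=[identity_adapter, motion_adapter]
        )
        loss.backward(); opt.step(); opt.zero_grad()
```

The point of the split is that stage 1 sees frames with their temporal order deliberately destroyed, so its weights can encode only what the character looks like; stage 2 then learns how the character moves without being able to overwrite that identity.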
Problem

Research questions and friction points this paper is trying to address.

Generate story-driven cartoon videos from child drawings
Disentangle character modeling from stylized background generation
Ensure visual consistency and cinematic quality in animations
Innovation

Methods, ideas, or system contributions that make the work stand out.

MLLM generates structured storyboard with shot details
Style propagation adapter ensures visual consistency (see the sketch after this list)
Two-stage motion customization disentangles identity and motion
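
As a rough illustration of the style propagation idea referenced above, the sketch below assumes the adapter projects image-encoder features of the child's drawing and injects them alongside the text conditioning of the background denoiser. All modules and the `denoiser.sample` entry point are hypothetical, not the paper's code.

```python
# A minimal sketch of the style propagation idea: project features of the
# child's drawing and inject them alongside text conditioning so the
# background denoiser inherits the drawing's style. All modules and the
# `denoiser.sample` entry point are hypothetical, not the paper's code.
import torch
import torch.nn as nn

class StylePropagationAdapter(nn.Module):
    def __init__(self, style_dim: int = 768, hidden_dim: int = 1024):
        super().__init__()
        # Lightweight projection of image-encoder features of the drawing.
        self.proj = nn.Sequential(
            nn.Linear(style_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, style_dim),
        )

    def forward(self, style_tokens: torch.Tensor) -> torch.Tensor:
        # style_tokens: (batch, n_tokens, style_dim), e.g. from a frozen
        # image encoder applied to the character drawing.
        return self.proj(style_tokens)

def synthesize_background(denoiser, text_emb, style_tokens, adapter):
    # Concatenate projected style tokens with the text embedding so the
    # denoiser cross-attends to the drawing's style while generating the
    # scene around the character.
    cond = torch.cat([text_emb, adapter(style_tokens)], dim=1)
    return denoiser.sample(cond)  # hypothetical sampling entry point
```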