🤖 AI Summary
This paper addresses the high technical barrier and limited creative flexibility in 2D character animation production by proposing an end-to-end AI-assisted authoring framework. Methodologically, it integrates natural language understanding, multimodal diffusion generation, and motion modeling to enable one-click generation of personalized characters from textual descriptions. It introduces a novel hierarchical clothing-aware rigging mechanism and a dynamic mesh-skeleton co-optimization algorithm, while leveraging BVH representations and motion diffusion models to support real-time animation synthesis and cross-character motion transfer. Experiments demonstrate that the system substantially lowers production barriers, delivers high-fidelity outputs with real-time responsiveness, and significantly improves animation asset reusability. The core contribution is the first holistic 2D animation AI generation paradigm that is semantics-driven, generation-controllable, rigging-adaptive, and motion-transferable.
📝 Abstract
This research presents Spiritus, an AI-assisted creation tool designed to streamline 2D character animation production while enhancing creative flexibility. By integrating natural language processing and diffusion models, users can efficiently transform natural language descriptions into personalized 2D characters and animations. The system employs automated segmentation, layered costume techniques, and dynamic mesh-skeleton binding to support flexible adaptation of complex costumes and additional components. Spiritus further achieves real-time animation generation and efficient reuse of animation resources across characters by combining BVH motion data with motion diffusion models. Experimental results demonstrate Spiritus's effectiveness in reducing technical barriers, enhancing creative freedom, and enabling cross-character reuse of animation assets. Future work will focus on optimizing the user experience and further exploring the system's human-computer collaboration potential.
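The abstract's claim of animation reuse across characters rests on BVH-style motion data, where each frame stores per-joint rotation channels keyed by joint name. As an illustration only, the sketch below shows a naive form of such transfer: copying per-frame joint rotations onto a differently named target rig. All names (`retarget`, `joint_map`, the joint labels) are hypothetical and do not come from the paper; Spiritus's actual retargeting uses motion diffusion models and is not reproduced here.

```python
# Hypothetical sketch of naive cross-character motion reuse with
# BVH-style per-joint Euler rotation channels. Illustrative only;
# not the paper's algorithm.

def retarget(frames, target_joints, joint_map=None, rest_pose=(0.0, 0.0, 0.0)):
    """Copy per-frame Euler rotations from a source clip onto a target rig.

    frames: list of {joint_name: (rx, ry, rz)} dicts, one per frame.
    target_joints: joint names of the target character's skeleton.
    joint_map: optional {target_joint: source_joint} renaming.
    Target joints with no source data fall back to the rest pose.
    """
    joint_map = joint_map or {}
    out = []
    for frame in frames:
        retargeted = {}
        for joint in target_joints:
            src = joint_map.get(joint, joint)  # resolve renamed joints
            retargeted[joint] = frame.get(src, rest_pose)
        out.append(retargeted)
    return out

# Toy usage: a two-frame arm-raise clip transferred to a rig whose
# arm joint is named differently and which has an extra "tail" joint.
clip = [
    {"hips": (0.0, 0.0, 0.0), "arm_L": (0.0, 0.0, 45.0)},
    {"hips": (0.0, 0.0, 0.0), "arm_L": (0.0, 0.0, 90.0)},
]
new_rig = ["hips", "left_arm", "tail"]
moved = retarget(clip, new_rig, joint_map={"left_arm": "arm_L"})
```

In real BVH pipelines this per-joint copy is only the starting point; differences in bone lengths and rest orientations require offset correction, which is where learned models such as motion diffusion can improve on direct channel copying.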