🤖 AI Summary
This work addresses the challenge of transferring human facial expressions to 3D animal face models without relying on any animal expression data. To this end, the authors propose a disentangled latent embedding space that separates identity from expression. The identity subspace is built from species-agnostic intrinsic geometric descriptors, the Heat Kernel Signature (HKS) and Wave Kernel Signature (WKS), while the expression subspace uses mesh-independent embeddings trained with Jacobian, vertex-position, and Laplacian losses to encourage cross-species generalization. Trained exclusively on human expression data, the method achieves zero-shot human-to-animal 3D facial expression transfer, generating plausible animal expressions without any animal-specific annotations and bridging the gap created by the substantial geometric differences between human and animal faces.
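To make the identity descriptors concrete: the Heat Kernel Signature is defined as HKS(x, t) = Σᵢ exp(−λᵢ t) φᵢ(x)², where (λᵢ, φᵢ) are eigenpairs of the mesh Laplacian. The sketch below is not the paper's code; it computes HKS from an eigendecomposition, using a tiny 4-cycle graph Laplacian as a stand-in for a real cotangent mesh Laplacian.

```python
import numpy as np

def heat_kernel_signature(evals, evecs, times):
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2.

    evals: (k,) Laplacian eigenvalues; evecs: (n, k) eigenvectors;
    times: (m,) diffusion time samples. Returns (n, m) per-vertex descriptors.
    """
    weights = np.exp(-np.outer(evals, times))  # (k, m) spectral weights
    return (evecs ** 2) @ weights

# Toy stand-in for a mesh Laplacian: uniform Laplacian of a 4-cycle graph.
n = 4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
evals, evecs = np.linalg.eigh(L)

hks = heat_kernel_signature(evals, evecs, times=np.array([0.1, 1.0, 10.0]))
print(hks.shape)
```

Because the descriptor depends only on the Laplacian's spectrum, it is intrinsic: vertices that are geometrically interchangeable (as all four vertices of the cycle are) receive identical signatures, which is exactly the property that makes HKS/WKS species-agnostic.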
📝 Abstract
We present a zero-shot framework for transferring human facial expressions to 3D animal face meshes. Our method combines intrinsic geometric descriptors (HKS/WKS) with a mesh-agnostic latent embedding that disentangles facial identity and expression. The ID latent space captures species-independent facial structure, while the expression latent space encodes deformation patterns that generalize across humans and animals. Trained only on human expression pairs, the model learns to embed, decouple, and recouple identity and expression across subjects, enabling expression transfer without any animal expression data. To enforce geometric consistency, we employ a Jacobian loss together with vertex-position and Laplacian losses. Experiments show that our approach achieves plausible cross-species expression transfer, effectively narrowing the geometric gap between human and animal facial shapes.
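The three geometric-consistency losses can be sketched as follows. This is an illustrative reading, not the paper's implementation: the per-triangle deformation gradient J = E_def · pinv(E_rest), the uniform graph Laplacian, and the toy mesh are all assumptions made for the example.

```python
import numpy as np

def edge_matrix(V, F):
    """Per-triangle edge matrices, shape (m, 3, 2)."""
    return np.stack([V[F[:, 1]] - V[F[:, 0]], V[F[:, 2]] - V[F[:, 0]]], axis=2)

def jacobian_loss(V_rest, V_pred, V_gt, F):
    """Match per-triangle deformation gradients J = E_def @ pinv(E_rest)."""
    pinv = np.linalg.pinv(edge_matrix(V_rest, F))   # (m, 2, 3)
    J_pred = edge_matrix(V_pred, F) @ pinv          # (m, 3, 3)
    J_gt = edge_matrix(V_gt, F) @ pinv
    return np.mean((J_pred - J_gt) ** 2)

def laplacian_loss(V_pred, V_gt, L):
    """Match differential (Laplacian) coordinates of the two meshes."""
    return np.mean((L @ V_pred - L @ V_gt) ** 2)

def vertex_loss(V_pred, V_gt):
    """Plain L2 on absolute vertex positions."""
    return np.mean((V_pred - V_gt) ** 2)

# Toy two-triangle mesh standing in for a face mesh.
V_rest = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
F = np.array([[0, 1, 2], [1, 3, 2]])

# Uniform graph Laplacian assembled from face connectivity.
n = len(V_rest)
A = np.zeros((n, n))
for f in F:
    for i in range(3):
        a, b = f[i], f[(i + 1) % 3]
        A[a, b] = A[b, a] = 1.0
L = np.diag(A.sum(axis=1)) - A

V_gt = V_rest + 0.1 * np.random.default_rng(0).normal(size=V_rest.shape)
print(jacobian_loss(V_rest, V_gt, V_gt, F), vertex_loss(V_gt + 5.0, V_gt))
```

One reason to combine all three terms: the Jacobian and Laplacian losses are invariant to global translation (edges and differential coordinates cancel any constant offset), so they constrain local deformation quality, while the vertex-position loss pins down the absolute placement of the mesh.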