Instant Expressive Gaussian Head Avatar via 3D-Aware Expression Distillation

📅 2025-12-18
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing 2D portrait animation methods deliver rich visual detail but suffer from poor 3D consistency and fall short of real-time performance, while 3D-aware feed-forward approaches achieve high speed and geometric coherence but lack expressive facial dynamics. To bridge this gap, we propose the first 3D-aware expression distillation framework that transfers fine-grained expression knowledge from a 2D video diffusion model to a lightweight feed-forward encoder, enabling end-to-end, instant conversion of a single in-the-wild face image into a 3D Gaussian avatar. Our method decouples motion learning from the 3D structural representation, eliminates reliance on predefined parametric models, and replaces global attention with a local feature fusion mechanism. Leveraging explicit 3D Gaussian splatting for rendering, our approach achieves state-of-the-art expression fidelity while maintaining real-time performance at 107.31 FPS, establishing a new Pareto-optimal trade-off between quality and speed.
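To make the distillation idea concrete, here is a minimal training-step sketch. All names below (`teacher.reenact`, `splatter`, the bare L1 loss) are hypothetical assumptions for illustration, not the paper's actual interface: a frozen 2D diffusion teacher supplies expressive pseudo-ground-truth frames, and the lightweight feed-forward student is supervised so that its splatted 3D Gaussian render matches them.

```python
# Minimal sketch of 3D-aware expression distillation (hypothetical names).
# A frozen 2D diffusion teacher provides expressive pseudo-ground-truth frames;
# a lightweight feed-forward student maps (source image, driving frame) to
# 3D Gaussian parameters, which are splatted and supervised against the teacher.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, splatter, optimizer,
                      source_img, driving_frame, camera):
    # Teacher: 2D diffusion reenactment, treated as a frozen black box.
    with torch.no_grad():
        target = teacher.reenact(source_img, driving_frame)  # (B, 3, H, W)

    # Student: feed-forward encoder -> per-Gaussian parameters.
    gaussians = student(source_img, driving_frame)  # positions, scales, colors, ...
    render = splatter(gaussians, camera)            # differentiable 3DGS render

    # Photometric distillation loss; perceptual terms would typically be added.
    loss = F.l1_loss(render, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```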

📝 Abstract
Portrait animation has witnessed tremendous quality improvements thanks to recent advances in video diffusion models. However, these 2D methods often compromise 3D consistency and speed, limiting their applicability in real-world scenarios, such as digital twins or telepresence. In contrast, 3D-aware facial animation feed-forward methods -- built upon explicit 3D representations, such as neural radiance fields or Gaussian splatting -- ensure 3D consistency and achieve faster inference speed, but come with inferior expression details. In this paper, we aim to combine their strengths by distilling knowledge from a 2D diffusion-based method into a feed-forward encoder, which instantly converts an in-the-wild single image into a 3D-consistent, fast yet expressive animatable representation. Our animation representation is decoupled from the face's 3D representation and learns motion implicitly from data, eliminating the dependency on pre-defined parametric models that often constrain animation capabilities. Unlike previous computationally intensive global fusion mechanisms (e.g., multiple attention layers) for fusing 3D structural and animation information, our design employs an efficient lightweight local fusion strategy to achieve high animation expressivity. As a result, our method runs at 107.31 FPS for animation and pose control while achieving comparable animation quality to the state-of-the-art, surpassing alternative designs that trade speed for quality or vice versa. Project website: https://research.nvidia.com/labs/amri/projects/instant4d
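The abstract's decoupling claim can be pictured with a minimal sketch. The architecture below is an assumption for illustration only (layer sizes, module names, and the position-only output are hypothetical, not the paper's design): structure and motion live in separate branches, with canonical Gaussians predicted once per identity and a compact expression code, learned implicitly from data rather than read from a 3DMM/FLAME-style parametric model, driving a small deformation head per frame.

```python
# Sketch of a decoupled structure/motion design (hypothetical architecture):
# a structure branch predicts canonical Gaussians from the source image once,
# while a motion branch compresses the driving frame into an expression code
# learned implicitly from data (no parametric-model coefficients involved).
import torch
import torch.nn as nn

class DecoupledAvatar(nn.Module):
    def __init__(self, feat_dim=128, expr_dim=64, num_gaussians=4096):
        super().__init__()
        # Structure branch: source image -> canonical per-Gaussian positions
        # (only xyz here, for brevity; scales/colors would be predicted too).
        self.structure = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_dim, num_gaussians * 3),
        )
        # Motion branch: driving frame -> compact learned expression code.
        self.motion = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, expr_dim),
        )
        # Deformation head: expression code perturbs canonical positions.
        self.deform = nn.Linear(expr_dim, num_gaussians * 3)
        self.n = num_gaussians

    def forward(self, source_img, driving_frame):
        canon = self.structure(source_img).view(-1, self.n, 3)  # once per identity
        expr = self.motion(driving_frame)                       # once per frame
        offsets = self.deform(expr).view(-1, self.n, 3)
        return canon + offsets  # animated Gaussian positions
```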
Problem

Research questions and friction points this paper is trying to address.

How to distill fine-grained expression knowledge from a 2D diffusion model into a fast, 3D-consistent avatar
How to decouple animation from the 3D face representation so that pre-defined parametric models no longer constrain expressivity
How to fuse 3D structural and expression features expressively without the cost of global attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills fine-grained expression knowledge from a 2D video diffusion teacher into a feed-forward encoder
Decouples the animation representation from the face's 3D representation and learns motion implicitly from data
Replaces global attention with an efficient lightweight local fusion of structural and expression features (see the sketch after this list)
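As a rough picture of what a lightweight local fusion can look like (an assumption; this summary does not specify the paper's exact operator): each Gaussian gathers expression features only from its k nearest anchors, so the cost scales with N·k gathers instead of the N·M interactions of global cross-attention.

```python
# Hedged sketch of local feature fusion (hypothetical operator): each Gaussian
# pulls expression features only from its k nearest anchor points instead of
# attending globally, trading O(N*M) attention for O(N*k) gathers.
import torch

def local_fusion(gauss_xyz, gauss_feat, anchor_xyz, anchor_feat, k=8):
    """
    gauss_xyz:   (N, 3) Gaussian positions
    gauss_feat:  (N, C) structural features
    anchor_xyz:  (M, 3) expression anchor positions
    anchor_feat: (M, C) expression features
    """
    # k nearest expression anchors per Gaussian (brute force for clarity).
    dists = torch.cdist(gauss_xyz, anchor_xyz)          # (N, M)
    knn_d, knn_i = dists.topk(k, dim=1, largest=False)  # both (N, k)

    neighbors = anchor_feat[knn_i]                      # (N, k, C)
    weights = torch.softmax(-knn_d, dim=1)              # closer anchors weigh more
    fused = (weights.unsqueeze(-1) * neighbors).sum(dim=1)  # (N, C)
    return gauss_feat + fused  # residual local fusion
```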