ControlHair: Physically-based Video Diffusion for Controllable Dynamic Hair Rendering

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Hair dynamics simulation and rendering remain highly challenging due to material diversity, complex light transport, and the difficulty of physically accurate modeling. Existing video diffusion models lack fine-grained control over hair motion. This paper introduces the first physics-informed, controllable dynamic hair video generation framework, structured as a three-stage pipeline: physics-based simulation → control signal extraction → conditional diffusion-based generation. Physical parameters are encoded into frame-level geometric representations, explicitly decoupling physical reasoning from visual synthesis and enabling precise control over the dynamics. Trained on a 10K-video dataset, the method significantly outperforms text- and pose-conditioned baselines in both visual fidelity and control accuracy. It enables novel applications including dynamic hairstyle virtual try-on, cinematic bullet-time effects, and high-fidelity cinemagraph generation for film production.

📝 Abstract
Hair simulation and rendering are challenging due to complex strand dynamics, diverse material properties, and intricate light-hair interactions. Recent video diffusion models can generate high-quality videos, but they lack fine-grained control over hair dynamics. We present ControlHair, a hybrid framework that integrates a physics simulator with conditional video diffusion to enable controllable dynamic hair rendering. ControlHair adopts a three-stage pipeline: it first encodes physics parameters (e.g., hair stiffness, wind) into per-frame geometry using a simulator, then extracts per-frame control signals, and finally feeds the control signals into a video diffusion model to generate videos with the desired hair dynamics. This cascaded design decouples physics reasoning from video generation, supports diverse physics, and simplifies training of the video diffusion model. Trained on a curated 10K-video dataset, ControlHair outperforms text- and pose-conditioned baselines, delivering precisely controlled hair dynamics. We further demonstrate three use cases of ControlHair: dynamic hairstyle try-on, bullet-time effects, and cinemagraphs. ControlHair introduces the first physics-informed video diffusion framework for controllable dynamics. A teaser video and experimental results are available on our website.
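
The cascaded three-stage design is concrete enough to sketch. Below is a minimal, runnable Python toy illustrating the flow: simulate physics parameters into per-frame geometry, rasterize the geometry into frame-level control signals, and condition a generator on those signals. The function names, the mass-spring strand dynamics, the binary control-map format, and the stubbed generator are all illustrative assumptions, not ControlHair's actual simulator, control representation, or diffusion backbone.

```python
# Toy sketch of a simulate -> extract-controls -> generate cascade.
# All names and the dynamics model are hypothetical placeholders.
import numpy as np

def simulate_hair(stiffness: float, wind: np.ndarray, num_frames: int,
                  num_points: int = 16) -> np.ndarray:
    """Stage 1 (toy): integrate a damped spring chain under wind to get
    per-frame strand geometry, shape (num_frames, num_points, 3)."""
    pos = np.stack([np.zeros(num_points),
                    -np.linspace(0.0, 1.0, num_points),
                    np.zeros(num_points)], axis=1)  # strand hanging down
    rest = pos.copy()
    vel = np.zeros_like(pos)
    dt = 1.0 / 30.0
    frames = []
    for _ in range(num_frames):
        force = stiffness * (rest - pos) + wind  # spring pull + wind push
        vel = 0.95 * (vel + dt * force)          # damped explicit Euler step
        pos = pos + dt * vel
        pos[0] = rest[0]                         # root point stays anchored
        frames.append(pos.copy())
    return np.stack(frames)

def extract_control_signals(geometry: np.ndarray, res: int = 64) -> np.ndarray:
    """Stage 2 (toy): rasterize each frame's strand points into a binary
    control map that the generator is conditioned on."""
    maps = np.zeros((len(geometry), res, res), dtype=np.float32)
    for t, pts in enumerate(geometry):
        uv = ((pts[:, :2] + 1.5) / 3.0 * (res - 1)).astype(int).clip(0, res - 1)
        maps[t, uv[:, 1], uv[:, 0]] = 1.0
    return maps

def generate_video(control_maps: np.ndarray) -> np.ndarray:
    """Stage 3 (stub): stand-in for the conditional video diffusion model,
    which would denoise RGB frames conditioned on the control maps."""
    return control_maps  # placeholder: a real model synthesizes video frames

video = generate_video(extract_control_signals(
    simulate_hair(stiffness=8.0, wind=np.array([0.5, 0.0, 0.0]), num_frames=24)))
print(video.shape)  # (24, 64, 64)
```

The point of the cascade is that only Stage 3 needs to be learned: Stages 1 and 2 are deterministic, so swapping in different physics (stiffness, wind, collisions) changes the control signals without retraining the generator.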
Problem

Research questions and friction points this paper is trying to address.

Achieving fine-grained control over dynamic hair rendering
Integrating physics simulation with video diffusion models
Decoupling physics reasoning from video generation process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid framework coupling a physics simulator with conditional video diffusion
Three-stage pipeline for controllable hair rendering
Physics-informed design that decouples physics reasoning from video generation
👥 Authors
Weikai Lin (University of Rochester, Computer Science)
Haoxiang Li (Pixocial Technology)
Yuhao Zhu (University of Rochester)