🤖 AI Summary
Monocular video-driven dynamic clothing reconstruction suffers from geometric detail loss and deformation artifacts: implicit volumetric methods struggle to capture high-frequency wrinkles, while template-based displacement approaches often introduce mesh distortions. This paper proposes a neural gradient-driven explicit mesh deformation framework that replaces vertex displacement with differentiable geometric gradients to eliminate deformation artifacts. We design an adaptive remeshing strategy to accurately capture dynamic surface details, such as flowing skirts, and jointly optimize dynamic texture mapping with differentiable rendering to achieve frame-wise high-fidelity recovery of illumination, shadows, and fabric textures. Our method integrates the strengths of implicit modeling and explicit deformation in an end-to-end manner. It significantly outperforms state-of-the-art approaches across multiple benchmarks, achieving a 2.1 dB PSNR improvement in geometric detail fidelity, alongside substantial gains in visual realism and motion consistency.
📝 Abstract
Dynamic garment reconstruction from monocular video is an important yet challenging task due to the complex dynamics and unconstrained nature of garments. Recent advances in neural rendering have enabled high-quality geometric reconstruction with image/video supervision. However, implicit representation methods that use volume rendering often produce over-smoothed geometry and fail to model high-frequency details. While template reconstruction methods model explicit geometry, they rely on vertex displacement for deformation, which results in artifacts. Addressing these limitations, we propose NGD, a Neural Gradient-based Deformation method to reconstruct dynamically evolving textured garments from monocular videos. Additionally, we propose a novel adaptive remeshing strategy for modeling dynamically evolving surfaces, such as the wrinkles and pleats of a skirt, leading to high-quality reconstruction. Finally, we learn dynamic texture maps to capture per-frame lighting and shadow effects. We provide extensive qualitative and quantitative evaluations that demonstrate significant improvements over existing SOTA methods, yielding high-quality garment reconstructions.
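The paper's exact formulation is not given in this summary, but the core idea of gradient-based deformation (as opposed to per-vertex displacement) can be illustrated with a classic gradient-domain sketch: instead of moving vertices directly, prescribe a linear transform on each edge's difference vector and recover vertex positions by a least-squares (Poisson-style) solve. The mesh, the uniform rotation `J`, and the anchoring of vertex 0 below are all illustrative assumptions, not the authors' method.

```python
import numpy as np

# Toy planar mesh: 4 vertices of a unit quad, split into two triangles.
V = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# Hypothetical per-edge transform: a uniform 30-degree rotation. In a
# learned setting, a network would predict such gradients per face/edge.
theta = np.deg2rad(30.0)
J = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Least-squares system: for every edge (i, j), the deformed difference
# v_j' - v_i' should match J @ (v_j - v_i).
n = len(V)
rows, rhs = [], []
for i, j in edges:
    r = np.zeros(n)
    r[i], r[j] = -1.0, 1.0
    rows.append(r)
    rhs.append(J @ (V[j] - V[i]))

# Anchor vertex 0 at its original position to fix the free translation.
anchor = np.zeros(n)
anchor[0] = 1.0
rows.append(anchor)
rhs.append(V[0])

A = np.stack(rows)          # (num_edges + 1, n)
b = np.stack(rhs)           # (num_edges + 1, 2)
V_new, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Because every edge shares the same transform here, the solve recovers an exact rigid rotation of the mesh about vertex 0; with spatially varying gradients (e.g. network-predicted per-face Jacobians), the least-squares solve blends them into a globally consistent, artifact-free deformation, which is what direct vertex displacement lacks.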