Thin-Shell-SfT: Fine-Grained Monocular Non-rigid 3D Surface Tracking with Neural Deformation Fields

๐Ÿ“… 2025-03-25
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Reconstructing highly deformable thin-shell objects (e.g., cloth) from monocular RGB video remains challenging due to difficulties in recovering sub-millimeter wrinkle details, error accumulation over time, and weak rendering gradients. Method: We propose the first physics-informed implicit continuous spatiotemporal neural field framework. Specifically: (i) we formulate a continuous neural deformation field regularized by Kirchhoffโ€“Love thin-shell mechanics; (ii) we design a surface-guided differentiable 3D Gaussian splatting renderer to enhance geometric gradient quality; and (iii) we adopt an analysis-by-synthesis joint optimization strategy. Results: Extensive experiments demonstrate that our method significantly outperforms state-of-the-art approaches both qualitatively and quantitatively. Without depth supervision, it achieves high temporal consistency and fidelity in non-rigid surface tracking, accurately reconstructing dynamic sub-centimeter-scale wrinkles and fine-scale topological details.

Technology Category

Application Category

๐Ÿ“ Abstract
3D reconstruction of highly deformable surfaces (e.g. cloths) from monocular RGB videos is a challenging problem, and no solution provides a consistent and accurate recovery of fine-grained surface details. To account for the ill-posed nature of the setting, existing methods use deformation models with statistical, neural, or physical priors. They also predominantly rely on nonadaptive discrete surface representations (e.g. polygonal meshes), perform frame-by-frame optimisation leading to error propagation, and suffer from poor gradients of the mesh-based differentiable renderers. Consequently, fine surface details such as cloth wrinkles are often not recovered with the desired accuracy. In response to these limitations, we propose ThinShell-SfT, a new method for non-rigid 3D tracking that represents a surface as an implicit and continuous spatiotemporal neural field. We incorporate continuous thin shell physics prior based on the Kirchhoff-Love model for spatial regularisation, which starkly contrasts the discretised alternatives of earlier works. Lastly, we leverage 3D Gaussian splatting to differentiably render the surface into image space and optimise the deformations based on analysis-bysynthesis principles. Our Thin-Shell-SfT outperforms prior works qualitatively and quantitatively thanks to our continuous surface formulation in conjunction with a specially tailored simulation prior and surface-induced 3D Gaussians. See our project page at https://4dqv.mpiinf.mpg.de/ThinShellSfT.
Problem

Research questions and friction points this paper is trying to address.

Monocular 3D tracking of highly deformable surfaces like cloth
Recovering fine-grained details such as cloth wrinkles accurately
Overcoming limitations of discrete representations and error propagation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses implicit continuous neural deformation fields
Incorporates Kirchhoff-Love thin shell physics
Leverages 3D Gaussian splatting for rendering
๐Ÿ”Ž Similar Papers
No similar papers found.