🤖 AI Summary
This work proposes PolyFit, a compact, learnable surface representation that avoids the computational expense and limited generalization of traditional per-vertex deformation modeling. PolyFit combines local jet fitting with data-driven surface modeling: a surface is described by jet functions fitted over local patches, so deformation reduces to updating a compact set of jet coefficients. The representation can be trained efficiently, in a supervised or self-supervised fashion, from analytic functions or real-world data, and supports test-time optimization. In Shape-from-Template tasks, PolyFit matches the accuracy of physics-based solvers while being significantly faster to compute. On garment draping benchmarks, it accelerates inference by an order of magnitude over strong baselines and generalizes robustly across mesh resolutions and garment types.
📝 Abstract
In this paper, we present PolyFit, a patch-based representation of surfaces obtained by fitting jet functions locally on surface patches. This representation can be learned efficiently in a supervised fashion from both analytic functions and real data, and once learned, it generalizes to various types of surfaces. With PolyFit, surfaces can be deformed efficiently by updating a compact set of jet coefficients rather than optimizing per-vertex degrees of freedom, which benefits many downstream tasks in computer vision and graphics. We demonstrate the capabilities of our method with two applications. 1) Shape-from-Template (SfT), where the goal is to deform an input 3D template of an object to match its appearance in an image or video. Using PolyFit, we adopt test-time optimization that delivers competitive accuracy while being markedly faster than offline physics-based solvers, and that outperforms recent physics-guided neural simulators in accuracy at modest additional runtime. 2) Garment draping, where we train a self-supervised, mesh- and garment-agnostic model that generalizes across resolutions and garment types, delivering up to an order-of-magnitude faster inference than strong baselines.
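The abstract does not spell out what fitting a jet to a patch looks like. As background for readers unfamiliar with jets, the sketch below shows classical least-squares fitting of a 2-jet (a quadratic height function z = f(x, y)) to a local point patch expressed in a tangent frame; the function name and basis ordering are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_jet(points, order=2):
    """Least-squares fit of a degree-`order` jet z = f(x, y) to a patch
    of 3D points given in a local tangent frame (illustrative sketch,
    not the paper's implementation). Returns the coefficient vector."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Monomial basis, ordered by total degree: [1, y, x, y^2, xy, x^2, ...]
    cols = [x**i * y**(j - i) for j in range(order + 1) for i in range(j + 1)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Sample a paraboloid patch z = 0.5*x^2 + 0.25*y^2 and recover its 2-jet.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(200, 2))
z = 0.5 * xy[:, 0]**2 + 0.25 * xy[:, 1]**2
pts = np.column_stack([xy, z])
coeffs = fit_jet(pts, order=2)  # 6 coefficients for a 2-jet
```

A 2-jet over a patch thus compresses hundreds of vertex positions into six coefficients, which is the kind of compact degree-of-freedom set the abstract refers to when it contrasts jet-coefficient updates with per-vertex optimization.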