🤖 AI Summary
Reconstructing high-genus 3D surface meshes from multi-view images is often hindered by geometric, appearance, and topological ambiguities, which can cause tunnels to collapse or structural details to be lost. This work proposes a mesh-based inverse rendering framework that, for the first time, integrates persistent homology priors into inverse rendering to explicitly model topological features such as tunnel and handle loops. By combining these topological constraints with multi-view photometric consistency, the method performs gradient-based optimization that preserves complex high-genus structures. Experiments demonstrate that the approach significantly outperforms existing mesh reconstruction techniques on both Chamfer Distance and Volume IoU, achieving superior geometric accuracy and topological robustness.
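The collaboration described above, photometric consistency plus a topology-aware penalty driving one gradient-based objective, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the `(birth, death)` persistence-pair representation, and the weight `lam` are all assumptions for exposition.

```python
import numpy as np

def photometric_loss(rendered, target):
    # Stand-in for the multi-view photometric consistency term:
    # mean squared error between a rendered image and its reference view.
    return float(np.mean((rendered - target) ** 2))

def topology_loss(persistence_pairs, genus_target):
    # Hypothetical persistence-based prior: a genus-g surface should have
    # 2g long-lived 1-dimensional features (tunnel and handle loops), so
    # penalize the total lifetime of all but the `genus_target` most
    # persistent (birth, death) pairs, suppressing spurious topology.
    lifetimes = sorted((death - birth for birth, death in persistence_pairs),
                       reverse=True)
    return float(sum(lifetimes[genus_target:]))

def total_loss(rendered, target, persistence_pairs, genus_target, lam=0.1):
    # Collaborative objective: photometric term plus the topological prior,
    # balanced by an assumed weight `lam`; gradients of this scalar would
    # drive the mesh vertex updates.
    return photometric_loss(rendered, target) + lam * topology_loss(
        persistence_pairs, genus_target)
```

With three persistence pairs and a target of two dominant loops, only the shortest-lived pair is penalized, so the prior leaves the intended high-genus structure untouched while discouraging extra handles.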
📝 Abstract
Reconstructing 3D objects from images is inherently an ill-posed problem due to ambiguities in geometry, appearance, and topology. This paper introduces collaborative inverse rendering with persistent homology priors, a novel strategy that leverages topological constraints to resolve these ambiguities. By incorporating priors that capture critical features such as tunnel loops and handle loops, our approach directly addresses the difficulty of reconstructing high-genus surfaces. The collaboration between photometric consistency from multi-view images and homology-based guidance enables recovery of complex high-genus geometry while avoiding catastrophic failures such as collapsing tunnels or losing high-genus structure. Rather than relying on neural networks, our method performs gradient-based optimization within a mesh-based inverse rendering framework, isolating the contribution of the topological priors. Experimental results show that incorporating persistent homology priors yields lower Chamfer Distance (CD) and higher Volume IoU than state-of-the-art mesh-based methods, demonstrating improved geometric accuracy and robustness against topological failure.
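For concreteness, the two evaluation metrics named in the abstract can be sketched as below. This is a generic NumPy illustration under common definitions (symmetric squared-distance Chamfer Distance over sampled point sets; IoU over voxel occupancy grids), not necessarily the exact variants used in the paper's experiments.

```python
import numpy as np

def chamfer_distance(a, b):
    # Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3):
    # for each point, find its nearest neighbor in the other set, then sum
    # the mean squared distances in both directions. Lower is better.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

def volume_iou(occ_a, occ_b):
    # Volume IoU between two boolean occupancy grids of equal shape:
    # |A ∩ B| / |A ∪ B|. Higher is better; sensitive to topological
    # failures such as filled-in tunnels, which inflate the intersection
    # gap between prediction and ground truth.
    inter = np.logical_and(occ_a, occ_b).sum()
    union = np.logical_or(occ_a, occ_b).sum()
    return float(inter / union) if union else 1.0
```

The brute-force pairwise distance matrix is O(NM) memory, so practical evaluations typically use a KD-tree nearest-neighbor query instead; the definitions are the same.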