🤖 AI Summary
To address the challenges of 3D foot reconstruction in self-scanning scenarios—where limited user mobility causes missing critical regions (e.g., arches and heels) and anatomical variability degrades reconstruction fidelity—this paper proposes an end-to-end, markerless, single-view reconstruction method. First, a viewpoint prediction module is designed using SE(3) group manifold normalization to resolve ambiguities inherent in Structure-from-Motion (SfM) alignment. Second, an attention-driven point cloud completion network is introduced, integrating synthetically augmented data with implicit geometric priors. Third, an anatomical fidelity constraint is incorporated to ensure clinical plausibility. Quantitative and qualitative evaluations demonstrate state-of-the-art reconstruction accuracy while preserving anatomical correctness. Moreover, the method achieves lightweight deployment on mobile devices, enabling practical clinical and consumer applications.
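The summary's "SE(3) group manifold normalization" refers to factoring out an arbitrary rigid-body pose (rotation + translation) before completion. The paper uses a learned viewpoint prediction module for this; as a purely illustrative stand-in, the sketch below canonicalizes a point cloud classically, by translating to its centroid and rotating onto its principal axes (the function name and PCA-based approach are assumptions for illustration, not the paper's method):

```python
import numpy as np

def se3_canonicalize(points: np.ndarray) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
    """Map an (N, 3) point cloud into a canonical pose by removing an SE(3)
    transform: translate to the centroid, then rotate onto the principal axes.
    NOTE: a classical PCA proxy, not the paper's learned viewpoint predictor."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal axes from the eigendecomposition of the 3x3 covariance matrix.
    cov = centered.T @ centered / len(points)
    _, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    R = eigvecs[:, ::-1]                       # columns sorted by descending variance
    if np.linalg.det(R) < 0:                   # enforce a proper rotation (det = +1),
        R[:, -1] *= -1                         # i.e. stay on the SE(3) manifold
    canonical = centered @ R                   # express points in the canonical frame
    return canonical, R, centroid

# Usage: an anisotropic ("foot-like") blob in an arbitrary pose maps to a
# frame whose axes are ordered by variance and whose centroid is the origin.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.3]) + np.array([5.0, -2.0, 1.0])
canon, R, t = se3_canonicalize(cloud)
```

A PCA canonicalization like this is ambiguous under axis sign flips and near-symmetric shapes, which is exactly the kind of SfM alignment ambiguity the paper's learned module is designed to resolve.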
📝 Abstract
Accurate 3D foot reconstruction is crucial for personalized orthotics, digital healthcare, and virtual fittings. However, existing methods struggle with incomplete scans and anatomical variations, particularly in self-scanning scenarios where user mobility is limited, making it difficult to capture areas like the arch and heel. We present a novel end-to-end pipeline that refines Structure-from-Motion (SfM) reconstruction. It first resolves scan alignment ambiguities using SE(3) canonicalization with a viewpoint prediction module, then completes missing geometry through an attention-based network trained on synthetically augmented point clouds. Our approach achieves state-of-the-art performance on reconstruction metrics while preserving clinically validated anatomical fidelity. By combining synthetic training data with learned geometric priors, we enable robust foot reconstruction under real-world capture conditions, unlocking new opportunities for mobile-based 3D scanning in healthcare and retail.
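The abstract cites "state-of-the-art performance on reconstruction metrics" without naming them; the standard metric for point cloud completion tasks like this is the symmetric Chamfer distance (an assumption here, not confirmed by the abstract). A minimal brute-force sketch:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    mean squared nearest-neighbor distance in both directions. Brute-force
    O(N*M) pairwise distances; a real pipeline would use a KD-tree."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)  # (N, M) pairwise sq. dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Usage: identical clouds score 0; a uniform 2-unit offset contributes
# a squared distance of 4 in each direction.
partial = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
shifted = partial + np.array([0.0, 0.0, 2.0])
cd = chamfer_distance(partial, shifted)  # → 8.0
```

Because it averages nearest-neighbor errors, Chamfer distance rewards geometric coverage but is blind to anatomical plausibility, which is why a clinically validated fidelity constraint, as described in the summary above, has to be evaluated separately.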