🤖 AI Summary
This work addresses a key limitation of multi-view 3D human reconstruction methods: they typically require precise camera calibration, which hinders their applicability in real-world scenarios. The paper proposes a calibration-free, feed-forward framework that reconstructs SMPL parameters and world-coordinate positions of arbitrary humans from uncalibrated multi-view images. Key innovations include learnable identity queries with soft assignment for cross-view person association, and multi-view geometric triangulation to resolve the depth ambiguity inherent in monocular methods. The model is trained end-to-end, with contrastive supervision for its Cross-View Identity Association module and a cross-view reprojection loss for body-pose consistency. Evaluated on the EgoHumans and EgoExo4D datasets, the method achieves competitive performance in both 3D reconstruction accuracy and camera pose estimation, while running 180× faster at inference than optimization-based approaches.
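The identity-query mechanism can be sketched as a similarity-then-softmax soft assignment between learnable person queries and per-view detection features. This is an illustrative assumption about the mechanism (function names, cosine similarity, and the temperature are not the paper's exact formulation):

```python
import numpy as np

def soft_assign(queries, features, temperature=0.1):
    """Soft-assign detections to person identities.

    queries:  (Q, D) learnable person-identity queries (hypothetical shape).
    features: (N, D) appearance features of detections across views.
    Returns an (Q, N) matrix where each row is a softmax distribution
    over detections, i.e. a differentiable (soft) assignment.
    """
    # Cosine similarity: normalize both sides, then take dot products.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    logits = (q @ f.T) / temperature
    # Row-wise softmax (numerically stabilized by subtracting the max).
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

A contrastive loss would then pull each query toward features of the same person in other views and push it away from the rest; the soft assignment keeps the association step differentiable during training.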
📝 Abstract
Reconstructing 3D humans from images captured from multiple perspectives typically requires pre-calibration, e.g., with checkerboards or MVS algorithms, which limits scalability and applicability in diverse real-world scenarios. In this work, we present \textbf{AHAP} (Reconstructing \textbf{A}rbitrary \textbf{H}umans from \textbf{A}rbitrary \textbf{P}erspectives), a feed-forward framework for reconstructing arbitrary humans from arbitrary camera perspectives without requiring camera calibration. Our core idea is the effective fusion of multi-view geometry to assist human association, reconstruction, and localization. Specifically, a Cross-View Identity Association module resolves cross-view identity association through learnable person queries and soft assignment, supervised by contrastive learning. A Human Head fuses cross-view features and scene context for SMPL prediction, guided by cross-view reprojection losses that enforce body-pose consistency. Additionally, multi-view triangulation eliminates the depth ambiguity inherent in monocular methods, yielding more precise 3D human localization. Experiments on EgoHumans and EgoExo4D demonstrate that AHAP achieves competitive performance on both world-space human reconstruction and camera pose estimation, while being 180$\times$ faster than optimization-based approaches.
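The depth-disambiguation step the abstract describes rests on standard multi-view triangulation: given 2D observations of the same point in several views and the corresponding projection matrices, the 3D position follows from a linear (DLT) least-squares solve. A minimal sketch, assuming noiseless normalized coordinates and known 3x4 projection matrices (the paper estimates camera poses itself; this only illustrates the geometry):

```python
import numpy as np

def triangulate(projections, points_2d):
    """Linear (DLT) triangulation of one 3D point from multiple views.

    projections: list of 3x4 camera projection matrices.
    points_2d:   list of matching (x, y) observations, one per view.
    Each view contributes two linear constraints on the homogeneous
    point X: x * (P[2] @ X) = P[0] @ X and y * (P[2] @ X) = P[1] @ X.
    """
    A = []
    for P, (x, y) in zip(projections, points_2d):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    A = np.asarray(A)
    # The least-squares solution is the right singular vector with the
    # smallest singular value; dehomogenize to get Euclidean coordinates.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With two or more views, this removes the scale/depth ambiguity a single camera cannot resolve, which is why the multi-view setting permits world-coordinate human localization.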