AI Summary
This work addresses the significant performance degradation of 2D/3D human pose estimation in crowded scenes by proposing BBoxMaskPose v2, a novel framework that integrates probabilistic 2D pose estimation (PMPose), SAM-based mask refinement, a 2D-to-3D prompting mechanism, and mutual conditional modeling. The method is the first to surpass 50 AP on the OCHuman dataset, an improvement of 6 AP, and also gains 1.5 AP on COCO. Furthermore, the study introduces a new benchmark, OCHuman-Pose, which reveals that multi-person pose estimation performance depends more on pose prediction accuracy than on detection quality, highlighting the critical role of precise 2D pose estimation in enabling robust 3D pose recovery.
Abstract
Most 2D human pose estimation benchmarks are nearly saturated, with the exception of crowded scenes. We introduce PMPose, a top-down 2D pose estimator that incorporates a probabilistic formulation and mask conditioning. PMPose improves crowded pose estimation without sacrificing performance on standard scenes. Building on this, we present BBoxMaskPose v2 (BMPv2), which integrates PMPose with an enhanced SAM-based mask refinement module. BMPv2 surpasses the state of the art by 1.5 average precision (AP) points on COCO and by 6 AP points on OCHuman, becoming the first method to exceed 50 AP on OCHuman. We demonstrate that BMP's 2D prompting of the 3D model improves 3D pose estimation in crowded scenes and that advances in 2D pose quality directly benefit 3D estimation. Results on the new OCHuman-Pose dataset show that multi-person performance is more affected by pose prediction accuracy than by detection. The code, models, and data are available at https://MiraPurkrabek.github.io/BBox-Mask-Pose/.