Every Angle Is Worth A Second Glance: Mining Kinematic Skeletal Structures from Multi-view Joint Cloud

📅 2025-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of 3D joint correspondence and identity association in multi-person motion capture under multi-view sparse observations—exacerbated by severe self- and inter-occlusion—this paper proposes a Joint Cloud construction framework. For each view, 2D joints of the same semantic type are independently triangulated to generate candidate point clouds; these are then fused via our novel Joint Cloud Selection and Aggregation Transformer (JCSAT), which jointly enforces trajectory consistency, skeletal structural priors, and multi-view geometric constraints. The core Optimal Token Attention Path (OTAP) module enables robust token-level selection and feature aggregation. Evaluated on the newly released, high-difficulty BUMocap-X dataset and multiple established benchmarks, our method achieves significant improvements over state-of-the-art approaches, reducing 3D MPJPE error by 18.7% under severe occlusion scenarios.

📝 Abstract
Multi-person motion capture from sparse angular observations is a challenging problem under interference from both self- and mutual occlusions. Existing works produce accurate 2D joint detections; however, when these are triangulated and lifted into 3D, available solutions struggle to select the most accurate candidates and to associate them with the correct joint type and target identity. To fully utilize all accurate 2D joint location information, we propose to independently triangulate all same-typed 2D joints across camera views regardless of their target ID, forming the Joint Cloud. The Joint Cloud consists of both valid joints lifted from the same joint type and target ID and falsely constructed ones originating from different 2D sources. These redundant and inaccurate candidates are processed by the proposed Joint Cloud Selection and Aggregation Transformer (JCSAT), whose three cascaded encoders deeply explore the trajectory, skeletal-structural, and view-dependent correlations among all 3D point candidates in the cross-embedding space. An Optimal Token Attention Path (OTAP) module then selects and aggregates informative features from these redundant observations for the final prediction of human motion. To demonstrate the effectiveness of JCSAT, we build and publish BUMocap-X, a new multi-person motion capture dataset with complex interactions and severe occlusions. Comprehensive experiments on the newly presented dataset as well as established benchmarks validate the effectiveness of the proposed framework, which outperforms existing state-of-the-art methods, especially under challenging occlusion scenarios.
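The Joint Cloud construction described in the abstract — exhaustively triangulating every cross-view pair of same-typed 2D joints regardless of person identity — can be sketched with standard linear (DLT) triangulation. This is a minimal illustration, not the authors' implementation: the function names `triangulate_dlt` and `build_joint_cloud` are hypothetical, and the sketch triangulates view pairs only, with none of the paper's selection or aggregation stages.

```python
# Illustrative sketch of Joint Cloud construction (not the paper's code):
# triangulate every cross-view pair of same-typed 2D joints, ignoring ID,
# producing a redundant cloud of valid and falsely constructed 3D candidates.
import numpy as np
from itertools import combinations

def triangulate_dlt(P1, P2, x1, x2):
    """Linear DLT triangulation of one 2D-2D correspondence.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points.
    Returns the 3D point minimizing the algebraic error A @ X = 0.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector of A (homogeneous 3D point)
    return X[:3] / X[3]             # dehomogenize

def build_joint_cloud(projections, detections):
    """Joint Cloud for ONE joint type (e.g. all detected left wrists).

    projections: list of 3x4 matrices, one per camera view.
    detections[i]: all 2D joints of that type seen in view i, from any person.
    Every cross-view pairing is triangulated, so the cloud mixes correct
    lifts with spurious ones formed from mismatched identities.
    """
    cloud = []
    for (i, P1), (j, P2) in combinations(list(enumerate(projections)), 2):
        for x1 in detections[i]:
            for x2 in detections[j]:
                cloud.append(triangulate_dlt(P1, P2, x1, x2))
    return np.array(cloud)
```

With V views and up to N same-typed detections per view, the cloud holds up to N² candidates per view pair, which is exactly why the paper needs JCSAT/OTAP to select and aggregate the informative candidates downstream.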
Problem

Research questions and friction points this paper is trying to address.

Multi-person motion capture
Self- and mutual-occlusions
3D joint selection and association
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint Cloud triangulation
JCSAT Transformer
OTAP module
👥 Authors
Junkun Jiang — Hong Kong Baptist University (Computer Vision · Human Pose Estimation · Motion Capture)
Jie Chen — Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China
Ho Yin Au — Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China
Mingyuan Chen — Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China
Wei Xue — Division of Emerging Interdisciplinary Areas, The Hong Kong University of Science and Technology, Hong Kong SAR, China
Yike Guo — Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR, China