Collaborative Learning for 3D Hand-Object Reconstruction and Compositional Action Recognition from Egocentric RGB Videos Using Superquadrics

📅 2025-01-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing methods exhibit weak generalization to unseen objects when recognizing familiar actions, particularly failing to jointly achieve 3D hand–object reconstruction and continuous action recognition. Method: We propose the first template-free framework for hand–object co-understanding from egocentric RGB videos. It introduces superquadrics into hand–object interaction modeling to enable parametric, differentiable object geometry reconstruction; designs a geometric relation attention mechanism to explicitly encode spatial hand–object constraints; and establishes the first benchmark split for verb–noun compositional generalization, extended from H2O/FPHA. Results: Our method significantly outperforms state-of-the-art approaches on compositional action recognition while improving 3D hand–object pose estimation accuracy, demonstrating that geometric priors effectively disentangle action semantics from object appearance.

Technology Category

Application Category

📝 Abstract
With the availability of egocentric 3D hand-object interaction datasets, there is increasing interest in developing unified models for hand-object pose estimation and action recognition. However, existing methods still struggle to recognise seen actions on unseen objects due to the limitations in representing object shape and movement using 3D bounding boxes. Additionally, the reliance on object templates at test time limits their generalisability to unseen objects. To address these challenges, we propose to leverage superquadrics as an alternative 3D object representation to bounding boxes and demonstrate their effectiveness on both template-free object reconstruction and action recognition tasks. Moreover, as we find that pure appearance-based methods can outperform the unified methods, the potential benefits from 3D geometric information remain unclear. Therefore, we study the compositionality of actions by considering a more challenging task where the training combinations of verbs and nouns do not overlap with the testing split. We extend H2O and FPHA datasets with compositional splits and design a novel collaborative learning framework that can explicitly reason about the geometric relations between hands and the manipulated object. Through extensive quantitative and qualitative evaluations, we demonstrate significant improvements over the state-of-the-arts in (compositional) action recognition.
Problem

Research questions and friction points this paper is trying to address.

Unseen Object Recognition
3D Reconstruction
Action Identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hypercube Representation
Appearance-based Action Recognition
Novel Action Combinations
🔎 Similar Papers
No similar papers found.