Real-Time Manipulation Action Recognition with a Factorized Graph Sequence Encoder

📅 2025-03-15
🤖 AI Summary
Real-time hand-action recognition in human-robot collaboration suffers from the poor temporal scalability and limited long-sequence generalization of lightweight models. To address this, we propose the Factorized Graph Sequence Encoder, which introduces Hand Pooling, a novel graph-level pooling operation, enabling efficient long-term temporal modeling without compromising latency. The model integrates a scene graph representation, factorized temporal encoding, graph-level pooling, and a lightweight embedding design to jointly ensure real-time inference and robust feature representation. On the Bimacs and CoAx benchmarks, it achieves 14.3% and 5.6% improvements in macro-F1 score, respectively, significantly outperforming existing real-time methods. Ablation studies confirm the effectiveness and synergistic contributions of each component.

📝 Abstract
Recognition of human manipulation actions in real-time is essential for safe and effective human-robot interaction and collaboration. The challenge lies in developing a model that is both lightweight enough for real-time execution and capable of generalization. While some existing methods in the literature can run in real-time, they struggle with temporal scalability, i.e., they fail to adapt to long-duration manipulations effectively. To address this, leveraging generalizable scene graph representations, we propose a new Factorized Graph Sequence Encoder network that not only runs in real-time but also scales effectively in the temporal dimension, thanks to its factorized encoder architecture. Additionally, we introduce the Hand Pooling operation, a simple pooling operation for more focused extraction of graph-level embeddings. Our model outperforms the previous state-of-the-art real-time approach, achieving a 14.3% and 5.6% improvement in F1-macro score on the KIT Bimanual Action (Bimacs) Dataset and Collaborative Action (CoAx) Dataset, respectively. Moreover, we conduct an extensive ablation study to validate our network design choices. Finally, we compare our model with an architecturally similar RGB-based model on the Bimacs dataset and show its limitations, in contrast to ours, on such an object-centric manipulation dataset.
Problem

Research questions and friction points this paper is trying to address.

Real-time recognition of human manipulation actions
Lightweight model for real-time execution and generalization
Effective temporal scalability for long-duration manipulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Factorized Graph Sequence Encoder for real-time action recognition
Hand Pooling operation for focused graph-level embeddings
Improved F1-macro scores on Bimacs and CoAx datasets
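The Hand Pooling idea listed above can be illustrated with a minimal sketch: instead of averaging the embeddings of every node in the scene graph, the graph-level readout keeps only the nodes corresponding to the hands. The function name, array shapes, and mask layout below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def hand_pooling(node_embeddings: np.ndarray, hand_mask: np.ndarray) -> np.ndarray:
    """Graph-level readout restricted to hand nodes.

    node_embeddings: (num_nodes, dim) array of per-node features.
    hand_mask: boolean (num_nodes,) array, True where a node is a hand.
    Returns a single (dim,) graph-level embedding.
    """
    # Select only the hand nodes, then mean-pool them into one vector.
    hand_nodes = node_embeddings[hand_mask]
    return hand_nodes.mean(axis=0)

# Toy scene graph: 5 nodes (2 hands + 3 objects), 4-dim embeddings.
emb = np.arange(20, dtype=float).reshape(5, 4)
mask = np.array([True, True, False, False, False])  # nodes 0-1 are hands
graph_embedding = hand_pooling(emb, mask)  # shape (4,)
```

The design intuition, per the abstract, is that manipulation actions are driven by the hands, so restricting the readout to hand nodes yields a more focused graph-level embedding than pooling over all objects in the scene.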