🤖 AI Summary
This work addresses the online recognition of multi-instance micro-gestures in untrimmed videos, aiming to precisely localize their spatiotemporal boundaries and discriminate fine-grained categories. The task is challenging due to the temporal density of micro-gestures, ambiguous spatiotemporal boundaries, and minimal inter-class discriminability. To tackle these challenges, we propose an end-to-end framework integrating handcrafted data augmentation with spatiotemporal attention mechanisms: motion-sensitive augmentation enhances modeling of subtle dynamic changes, while a joint spatial-temporal attention module enables focal region selection and discriminative temporal dynamics learning. Evaluated on the IJCAI 2025 MiGA Challenge, our method achieves a state-of-the-art F1-score of 38.03—surpassing the prior best by 37.9%—demonstrating significant improvements in both accuracy and robustness for online micro-gesture detection.
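The paper does not publish its architecture here, but the joint spatial-temporal attention it describes ("focal region selection and discriminative temporal dynamics learning") can be illustrated with a minimal numpy sketch: spatial attention scores pool per-frame region features into one vector per frame, and a scaled dot-product self-attention over frames then models temporal dynamics. All names (`spatial_temporal_attention`, `w_spat`, `wq`, `wk`) and the specific factorization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_temporal_attention(feats, w_spat, wq, wk):
    """Hypothetical sketch of joint spatial-temporal attention.

    feats: (T, N, D) features for T frames, each with N spatial regions.
    1) Spatial attention: score each region, softmax-pool to one vector per frame.
    2) Temporal attention: scaled dot-product self-attention across the T frames.
    """
    # --- spatial: region scores -> attention-weighted pooling per frame ---
    scores = feats @ w_spat                               # (T, N)
    alpha = softmax(scores, axis=1)                       # weights over regions
    frame_feats = (alpha[..., None] * feats).sum(axis=1)  # (T, D)
    # --- temporal: self-attention over the frame sequence ---
    q, k = frame_feats @ wq, frame_feats @ wk
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)  # (T, T)
    return attn @ frame_feats                             # (T, D), temporally contextualized
```

In this toy factorization the spatial step selects focal regions within each frame before the temporal step compares frames, mirroring the division of labor the summary describes.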
📝 Abstract
In this paper, we introduce the latest solution developed by our team, HFUT-VUT, for the Micro-gesture Online Recognition track of the IJCAI 2025 MiGA Challenge. The Micro-gesture Online Recognition task is a highly challenging problem that aims to locate the temporal positions and recognize the categories of multiple micro-gesture instances in untrimmed videos. Compared to traditional temporal action detection, this task places greater emphasis on distinguishing between micro-gesture categories and on precisely identifying the start and end times of each instance. Moreover, micro-gestures are typically spontaneous human actions, with subtler inter-class differences than those found in other human actions. To address these challenges, we propose hand-crafted data augmentation and spatial-temporal attention to enhance the model's ability to classify and localize micro-gestures more accurately. Our solution achieved an F1 score of 38.03, outperforming the previous state-of-the-art by 37.9%. As a result, our method ranked first in the Micro-gesture Online Recognition track.
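The abstract does not specify what the hand-crafted, motion-sensitive augmentation looks like. One plausible sketch, purely as an assumption: amplify frame-to-frame differences so subtle dynamics are emphasized, combined with a small random temporal jitter. The function name, `diff_weight` parameter, and jitter scheme are all hypothetical.

```python
import numpy as np

def motion_sensitive_augment(clip, diff_weight=0.5, rng=None):
    """Hypothetical motion-sensitive augmentation (not the authors' recipe).

    clip: (T, H, W, C) float video tensor.
    Blends a frame-difference signal into the clip so subtle motions are
    amplified, then applies a random temporal crop of up to ~10% per end.
    """
    rng = np.random.default_rng() if rng is None else rng
    diff = np.zeros_like(clip)
    diff[1:] = clip[1:] - clip[:-1]      # frame-to-frame motion signal
    aug = clip + diff_weight * diff      # emphasize dynamic changes
    # random temporal jitter: drop up to ~10% of frames at either end
    t = clip.shape[0]
    start = rng.integers(0, max(1, t // 10))
    end = t - rng.integers(0, max(1, t // 10))
    return aug[start:end]
```

Such an augmentation would push the model toward the short, low-amplitude movements that make micro-gesture boundaries hard to localize.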