🤖 AI Summary
To address imprecise localization caused by action-background ambiguity in weakly supervised temporal action localization (WS-TAL), this paper proposes a dual-stream uncertainty-aware framework. Methodologically, it fuses RGB and optical-flow features and introduces a hybrid multi-head attention (HMHA) module to adaptively suppress background noise. It further incorporates a generalized uncertainty-based evidential fusion (GUEF) module that models epistemic uncertainty at the snippet level, calibrating feature distributions and strengthening the credibility of foreground evidence. Crucially, the framework integrates evidential reasoning with dual-stream attention for end-to-end action-instance localization and classification. On THUMOS14, it significantly outperforms state-of-the-art methods, improving mAP@0.5 by 2.3%. The source code is publicly available.
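The dual-stream fusion described above can be illustrated with a minimal sketch. This is not the paper's HMHA module: the shared query/key/value projection, the head layout, and the simple feature concatenation are all simplifying assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hybrid_attention(rgb, flow, n_heads=4):
    """Illustrative dual-stream multi-head self-attention.

    rgb, flow: (T, D) snippet-level features from the two streams.
    The streams are concatenated, then each head attends over snippets,
    letting informative (foreground-like) context dominate the output.
    Returns fused features of shape (T, 2*D).
    """
    x = np.concatenate([rgb, flow], axis=-1)   # (T, 2D) fused snippet features
    T, D2 = x.shape
    assert D2 % n_heads == 0
    d = D2 // n_heads
    out = np.zeros_like(x)
    for h in range(n_heads):
        # Shared Q/K/V slice per head (real modules use learned projections).
        q = k = v = x[:, h * d:(h + 1) * d]
        attn = softmax(q @ k.T / np.sqrt(d))   # (T, T) snippet-to-snippet weights
        out[:, h * d:(h + 1) * d] = attn @ v
    return out

rgb = np.random.randn(5, 8)    # 5 snippets, 8-dim RGB features
flow = np.random.randn(5, 8)   # matching optical-flow features
fused = hybrid_attention(rgb, flow)
```

In the actual model, learned projection matrices and a task-specific attention design would replace the shared slices used here.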
📝 Abstract
Weakly supervised temporal action localization (WS-TAL) aims to localize complete action instances and categorize them using only video-level labels. Action-background ambiguity, caused primarily by background noise introduced during feature aggregation and by intra-action variation, remains a significant challenge for existing WS-TAL methods. In this paper, we introduce a hybrid multi-head attention (HMHA) module and a generalized uncertainty-based evidential fusion (GUEF) module to address this problem. HMHA enhances RGB and optical-flow features by filtering out redundant information and adjusting their distribution to better suit the WS-TAL task. GUEF adaptively eliminates the interference of background noise by fusing snippet-level evidence to refine the uncertainty measurement and select superior foreground features, enabling the model to focus on complete action instances and achieve better localization and classification performance. Experimental results on the THUMOS14 dataset demonstrate that our method outperforms state-of-the-art methods. Our code is available at https://github.com/heyuanpengpku/GUEF/tree/main.
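The idea of snippet-level uncertainty behind GUEF can be sketched in the style of evidential deep learning (subjective logic), where per-class evidence parameterizes a Dirichlet distribution and low total evidence means high uncertainty. The function names, the toy evidence vectors, and the simple additive fusion are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def snippet_uncertainty(evidence):
    """Subjective-logic belief and uncertainty for one snippet.

    evidence: non-negative per-class evidence vector of length K.
    alpha = evidence + 1 parameterizes a Dirichlet; the uncertainty
    mass u = K / sum(alpha) shrinks as total evidence grows, and
    belief masses plus u always sum to 1.
    """
    K = len(evidence)
    alpha = evidence + 1.0        # Dirichlet parameters
    S = alpha.sum()               # Dirichlet strength
    belief = evidence / S         # per-class belief masses
    u = K / S                     # uncertainty mass
    return belief, u

def fuse_evidence(ev_a, ev_b):
    """Naive evidence accumulation across snippets (or streams):
    summing evidence raises the Dirichlet strength, lowering u."""
    return ev_a + ev_b

ev_fg = np.array([9.0, 1.0, 0.0])   # confident foreground snippet
ev_bg = np.array([0.1, 0.1, 0.1])   # ambiguous background snippet

_, u_fg = snippet_uncertainty(ev_fg)
_, u_bg = snippet_uncertainty(ev_bg)
# The background snippet carries far higher uncertainty, so a
# GUEF-style module could down-weight it when aggregating snippets.
```

In this framing, selecting "superior foreground feature information" amounts to preferring snippets whose fused evidence yields low uncertainty mass.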