🤖 AI Summary
This work addresses key challenges in online action detection for streaming video: real-time inference, zero-/few-shot generalization, robustness to background noise, and handling of incomplete actions. To this end, we propose TOAD, a framework that is, to our knowledge, the first to integrate CLIP text embeddings into online action detection. TOAD couples a lightweight temporal modeling module with a vision–language contrastive alignment mechanism, enabling efficient zero-/few-shot transfer without fine-tuning the visual backbone. This design keeps computational overhead low, making the model suitable for resource-constrained deployment. On THUMOS14, TOAD achieves 82.46% mAP, outperforming prior state-of-the-art methods. It also sets the first zero-shot and few-shot baselines for online action detection on THUMOS14 and TVSeries, advancing open-vocabulary action understanding in streaming scenarios.
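The zero-shot mechanism described above can be illustrated with a minimal sketch: class names are encoded as text embeddings (in the real model, by the frozen CLIP text encoder), and a temporally modeled frame feature is scored against them by cosine similarity, CLIP-style. The prompts, embedding dimension, and temperature below are placeholder assumptions, not TOAD's actual values; random vectors stand in for the real encoders.

```python
import numpy as np

# Hypothetical dimensions and prompts; TOAD's actual configuration is not given here.
EMBED_DIM = 512
CLASS_PROMPTS = ["a video of diving", "a video of pole vault", "background"]

rng = np.random.default_rng(0)
# Stand-in for CLIP text embeddings of the class prompts
# (in the real model these come from the frozen CLIP text encoder).
text_emb = rng.standard_normal((len(CLASS_PROMPTS), EMBED_DIM))
# Stand-in for the temporally modeled visual feature of the current frame.
frame_feat = rng.standard_normal(EMBED_DIM)

def zero_shot_scores(frame_feat, text_emb, temperature=0.07):
    """CLIP-style contrastive alignment: cosine-similarity logits between a
    frame feature and the class text embeddings, softmaxed into probabilities."""
    f = frame_feat / np.linalg.norm(frame_feat)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (t @ f) / temperature
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()

probs = zero_shot_scores(frame_feat, text_emb)
print(CLASS_PROMPTS[int(np.argmax(probs))])
```

Because only text embeddings change when the label set changes, new action classes can be added at inference time without retraining, which is what enables the zero-/few-shot setting.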
📝 Abstract
Detecting actions as they occur is essential for applications such as video surveillance, autonomous driving, and human-robot interaction. Known as online action detection, this task requires classifying actions in streaming video, handling background noise, and coping with incomplete actions. Transformer architectures are the current state of the art, yet the potential of recent advances in computer vision, particularly vision-language models (VLMs), remains largely untapped for this problem, partly due to their high computational cost. In this paper, we introduce TOAD: a Text-driven Online Action Detection architecture that supports zero-shot and few-shot learning. TOAD leverages CLIP (Contrastive Language-Image Pretraining) textual embeddings, enabling efficient use of VLMs without significant computational overhead. Our model achieves 82.46% mAP on the THUMOS14 dataset, outperforming existing methods, and sets new baselines for zero-shot and few-shot performance on the THUMOS14 and TVSeries datasets.