Text-driven Online Action Detection

📅 2025-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key challenges in online action detection for streaming video—namely, real-time inference, zero-/few-shot generalization, robustness to background noise, and handling of incomplete actions. To this end, we propose TOAD, a novel framework that pioneers the integration of CLIP text embeddings into online action detection. TOAD employs a lightweight temporal modeling module and a vision–language contrastive alignment mechanism, enabling efficient zero-/few-shot transfer without fine-tuning the visual backbone. This design drastically reduces computational overhead, making it suitable for resource-constrained deployment. On THUMOS14, TOAD achieves 82.46% mAP, outperforming prior state-of-the-art methods. Moreover, we establish the first zero-shot and few-shot action detection benchmarks on THUMOS14 and TVSeries, advancing open-vocabulary action understanding in streaming scenarios.
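To make the mechanism concrete, here is a minimal sketch (not the authors' released code) of the general recipe the summary describes: encode the action class names once with the frozen CLIP text encoder, run a causal temporal module over per-frame visual features, and score each step against the text embeddings by cosine similarity. The GRU head, the prompt template, and the `detect_online` helper are illustrative assumptions standing in for TOAD's lightweight temporal modeling module, not its actual architecture.

```python
# Sketch of zero-shot online action detection with frozen CLIP text
# embeddings. Assumes the openai/CLIP package; the GRU temporal head is
# a hypothetical stand-in for TOAD's lightweight temporal module.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Frozen text embeddings for an (open) vocabulary of action classes.
class_names = ["high jump", "pole vault", "background"]  # illustrative
prompts = clip.tokenize(
    [f"a video of a person doing {c}" for c in class_names]
).to(device)
with torch.no_grad():
    text_emb = F.normalize(model.encode_text(prompts).float(), dim=-1)

# Hypothetical causal temporal head over per-frame CLIP visual features.
temporal_head = torch.nn.GRU(
    input_size=512, hidden_size=512, batch_first=True
).to(device)

def detect_online(frame_feats: torch.Tensor) -> torch.Tensor:
    """frame_feats: (1, T, 512) per-frame features from the frozen CLIP
    image encoder, on `device`. Returns (T, C) per-frame class probs."""
    hidden, _ = temporal_head(frame_feats)           # causal temporal context
    hidden = F.normalize(hidden.squeeze(0), dim=-1)  # (T, 512)
    logits = 100.0 * hidden @ text_emb.T             # CLIP-style scaled cosine sim
    return logits.softmax(dim=-1)
```

Because the class vocabulary lives entirely in the text prompts, unseen action names can be added without retraining the visual backbone, which is what enables the zero-/few-shot transfer the summary claims.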

📝 Abstract
Detecting actions as they occur is essential for applications like video surveillance, autonomous driving, and human-robot interaction. Known as online action detection, this task requires classifying actions in streaming videos, handling background noise, and coping with incomplete actions. Transformer architectures are the current state-of-the-art, yet the potential of recent advancements in computer vision, particularly vision-language models (VLMs), remains largely untapped for this problem, partly due to high computational costs. In this paper, we introduce TOAD: a Text-driven Online Action Detection architecture that supports zero-shot and few-shot learning. TOAD leverages CLIP (Contrastive Language-Image Pretraining) textual embeddings, enabling efficient use of VLMs without significant computational overhead. Our model achieves 82.46% mAP on the THUMOS14 dataset, outperforming existing methods, and sets new baselines for zero-shot and few-shot performance on the THUMOS14 and TVSeries datasets.
Problem

Research questions and friction points this paper is trying to address.

Online Action Detection
Vision-Language Models (VLMs)
Real-time Recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language Models
Zero-shot action recognition
CLIP text embeddings
Manuel Benavent-Lledó
Department of Computer Technology, University of Alicante, Spain
David Mulero-Pérez
Department of Computer Technology, University of Alicante, Spain
David Ortiz-Perez
PhD Student, University of Alicante
Deep Learning, Computer Vision, Multimodal
Jose Garcia-Rodriguez
Department of Computer Technology, University of Alicante, Spain; ValgrAI - Valencian Graduate School and Research Network of Artificial Intelligence, Valencia, Spain; Institute of Informatics Research, University of Alicante, Alicante, Spain