Making Every Frame Matter: Continuous Activity Recognition in Streaming Video via Adaptive Video Context Modeling

📅 2024-10-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenge of continuous recognition of multi-scale, untrimmed activities in streaming video, this paper proposes the CARS systemβ€”the first to introduce activity-space feature selection and activity-aware state updating for adaptive video context modeling. The method integrates lightweight spatial feature distillation, dynamic temporal state updating, and adaptive context compression, enabling end-to-end deployment on edge devices. It achieves >30 FPS inference speed on edge hardware while improving accuracy by 1.2–79.7 percentage points over baselines. As a video encoder, it boosts large-model performance by 0.46 points (on a 5-point scale) on in-distribution datasets and enhances zero-shot task accuracy by 1.19–4%. The core contribution is an efficient, deployable dynamic spatiotemporal context modeling framework that jointly optimizes accuracy, latency, and generalization across diverse activity recognition scenarios.

πŸ“ Abstract
Video activity recognition has become increasingly important in robots and embodied AI. Recognizing continuous video activities poses considerable challenges due to the fast expansion of streaming video, which contains multi-scale and untrimmed activities. We introduce a novel system, CARS, to overcome these issues through adaptive video context modeling. Adaptive video context modeling refers to selectively maintaining activity-related features in temporal and spatial dimensions. CARS has two key designs. The first is an activity spatial feature extraction by eliminating irrelevant visual features while maintaining recognition accuracy. The second is an activity-aware state update introducing dynamic adaptability to better preserve the video context for multi-scale activity recognition. Our CARS runs at speeds $>$30 FPS on typical edge devices and outperforms all baselines by 1.2% to 79.7% in accuracy. Moreover, we explore applying CARS to a large video model as a video encoder. Experimental results show that our CARS can result in a 0.46-point enhancement (on a 5-point scale) on the in-distribution video activity dataset, and an improvement ranging from 1.19% to 4% on zero-shot video activity datasets.
Problem

Research questions and friction points this paper is trying to address.

Recognizing continuous activities in streaming video.
Handling multi-scale and untrimmed video activities.
Improving accuracy and speed in video activity recognition.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive video context modeling for streaming video
Activity spatial feature extraction with high accuracy
Dynamic adaptability for multi-scale activity recognition
Hao Wu
Nanjing University
Donglin Bai
Microsoft Research
Shiqi Jiang
Microsoft Research
Qianxi Zhang
MSRA
Yifan Yang
Microsoft Research
Ting Cao
Microsoft Research
Fengyuan Xu
Nanjing University