π€ AI Summary
To address the challenge of continuous recognition of multi-scale, untrimmed activities in streaming video, this paper proposes the CARS systemβthe first to introduce activity-space feature selection and activity-aware state updating for adaptive video context modeling. The method integrates lightweight spatial feature distillation, dynamic temporal state updating, and adaptive context compression, enabling end-to-end deployment on edge devices. It achieves >30 FPS inference speed on edge hardware while improving accuracy by 1.2β79.7 percentage points over baselines. As a video encoder, it boosts large-model performance by 0.46 points (on a 5-point scale) on in-distribution datasets and enhances zero-shot task accuracy by 1.19β4%. The core contribution is an efficient, deployable dynamic spatiotemporal context modeling framework that jointly optimizes accuracy, latency, and generalization across diverse activity recognition scenarios.
π Abstract
Video activity recognition has become increasingly important in robots and embodied AI. Recognizing continuous video activities poses considerable challenges due to the fast expansion of streaming video, which contains multi-scale and untrimmed activities. We introduce a novel system, CARS, to overcome these issues through adaptive video context modeling. Adaptive video context modeling refers to selectively maintaining activity-related features in temporal and spatial dimensions. CARS has two key designs. The first is an activity spatial feature extraction by eliminating irrelevant visual features while maintaining recognition accuracy. The second is an activity-aware state update introducing dynamic adaptability to better preserve the video context for multi-scale activity recognition. Our CARS runs at speeds $>$30 FPS on typical edge devices and outperforms all baselines by 1.2% to 79.7% in accuracy. Moreover, we explore applying CARS to a large video model as a video encoder. Experimental results show that our CARS can result in a 0.46-point enhancement (on a 5-point scale) on the in-distribution video activity dataset, and an improvement ranging from 1.19% to 4% on zero-shot video activity datasets.