🤖 AI Summary
This work addresses two challenges in video anomaly detection that stem from the open-set nature of anomalies: the semantic ambiguity of static prompts and the high computational overhead of large foundation models. To overcome these issues, the authors propose a training-free agent framework that leverages iterative multi-turn dialogues between a Vision-Language Model (VLM) and a Large Language Model (LLM) to dynamically refine query prompts. This context-aware prompt-updating mechanism effectively unlocks the reasoning capabilities of lightweight models. The method achieves state-of-the-art performance on the UCF-Crime, XD-Violence, and UBNormal benchmarks, demonstrates strong generalization on ComplexVAD, and significantly reduces computational cost, making it suitable for edge deployment.
📝 Abstract
Video Anomaly Detection (VAD) is a fundamental challenge in computer vision, particularly due to the open-set nature of anomalies. While recent training-free approaches utilizing Vision-Language Models (VLMs) have shown promise, they typically rely on massive, resource-intensive foundation models to compensate for the ambiguity of static prompts. We argue that the bottleneck in VAD is not necessarily model capacity, but rather the static nature of inquiry. We propose QVAD, a question-centric agentic framework that treats VLM-LLM interaction as a dynamic dialogue. By iteratively refining queries based on visual context, our LLM agent guides smaller VLMs to produce high-fidelity captions and precise semantic reasoning without parameter updates. This "prompt-updating" mechanism effectively unlocks the latent capabilities of lightweight models, enabling state-of-the-art performance on UCF-Crime, XD-Violence, and UBNormal using a fraction of the parameters required by competing methods. We further demonstrate exceptional generalizability on the single-scene ComplexVAD dataset. Crucially, QVAD achieves high inference speeds with minimal memory footprints, making advanced VAD capabilities deployable on resource-constrained edge devices.
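The iterative VLM-LLM dialogue described above can be sketched as a simple loop. This is a minimal, hypothetical illustration of the control flow only: the function names (`vlm_describe`, `llm_refine`, `llm_score`), the fixed round count, and the stub model calls are placeholder assumptions, not the paper's actual interfaces or scoring rule.

```python
def vlm_describe(frame, prompt):
    """Placeholder VLM call: a lightweight vision-language model would
    answer `prompt` about `frame`; here we fake a caption string."""
    return f"caption for {frame!r} given '{prompt}'"

def llm_refine(prompt, caption, history):
    """Placeholder LLM agent: reads the latest caption and the dialogue
    history, then emits a sharper follow-up query (context-aware update)."""
    return f"{prompt} | focus on: {caption.split()[-1]}"

def llm_score(history):
    """Placeholder anomaly scorer: maps the accumulated dialogue to [0, 1].
    A real agent would reason over the full dialogue semantically."""
    return min(1.0, 0.2 * len(history))

def qvad_loop(frame, seed_prompt="Describe any unusual activity.", rounds=3):
    """Run `rounds` VLM-LLM turns, refining the query prompt each turn,
    and return an anomaly score plus the dialogue transcript."""
    prompt, history = seed_prompt, []
    for _ in range(rounds):
        caption = vlm_describe(frame, prompt)          # VLM answers current query
        history.append((prompt, caption))              # keep dialogue context
        prompt = llm_refine(prompt, caption, history)  # LLM sharpens the query
    return llm_score(history), history
```

The key design point, under these assumptions, is that no parameters are updated: only the query prompt changes between turns, which is what lets a small frozen VLM be steered toward anomaly-relevant detail.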